00:00:00.000 Started by upstream project "autotest-per-patch" build number 126127 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.105 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.105 The recommended git tool is: git 00:00:00.106 using credential 00000000-0000-0000-0000-000000000002 00:00:00.108 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.166 Fetching changes from the remote Git repository 00:00:00.169 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.233 Using shallow fetch with depth 1 00:00:00.233 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.233 > git --version # timeout=10 00:00:00.292 > git --version # 'git version 2.39.2' 00:00:00.292 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.335 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.335 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.274 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.286 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.300 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:06.300 > git config core.sparsecheckout # timeout=10 00:00:06.312 > git read-tree -mu HEAD # timeout=10 00:00:06.329 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:06.351 Commit message: "inventory: add WCP3 to free inventory" 00:00:06.351 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:06.440 [Pipeline] Start of Pipeline 00:00:06.452 [Pipeline] library 00:00:06.453 Loading library shm_lib@master 00:00:06.453 Library shm_lib@master is cached. Copying from home. 00:00:06.471 [Pipeline] node 00:00:06.483 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.484 [Pipeline] { 00:00:06.492 [Pipeline] catchError 00:00:06.493 [Pipeline] { 00:00:06.502 [Pipeline] wrap 00:00:06.509 [Pipeline] { 00:00:06.517 [Pipeline] stage 00:00:06.518 [Pipeline] { (Prologue) 00:00:06.700 [Pipeline] sh 00:00:07.026 + logger -p user.info -t JENKINS-CI 00:00:07.050 [Pipeline] echo 00:00:07.052 Node: GP8 00:00:07.060 [Pipeline] sh 00:00:07.364 [Pipeline] setCustomBuildProperty 00:00:07.379 [Pipeline] echo 00:00:07.381 Cleanup processes 00:00:07.387 [Pipeline] sh 00:00:07.667 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.667 546641 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.681 [Pipeline] sh 00:00:07.965 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.966 ++ grep -v 'sudo pgrep' 00:00:07.966 ++ awk '{print $1}' 00:00:07.966 + sudo kill -9 00:00:07.966 + true 00:00:07.982 [Pipeline] cleanWs 00:00:07.991 [WS-CLEANUP] Deleting project workspace... 00:00:07.991 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.998 [WS-CLEANUP] done 00:00:08.001 [Pipeline] setCustomBuildProperty 00:00:08.013 [Pipeline] sh 00:00:08.296 + sudo git config --global --replace-all safe.directory '*' 00:00:08.372 [Pipeline] httpRequest 00:00:08.437 [Pipeline] echo 00:00:08.438 Sorcerer 10.211.164.101 is alive 00:00:08.445 [Pipeline] httpRequest 00:00:08.449 HttpMethod: GET 00:00:08.450 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.451 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.456 Response Code: HTTP/1.1 200 OK 00:00:08.456 Success: Status code 200 is in the accepted range: 200,404 00:00:08.457 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:18.580 [Pipeline] sh 00:00:18.861 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:18.878 [Pipeline] httpRequest 00:00:18.917 [Pipeline] echo 00:00:18.918 Sorcerer 10.211.164.101 is alive 00:00:18.925 [Pipeline] httpRequest 00:00:18.929 HttpMethod: GET 00:00:18.930 URL: http://10.211.164.101/packages/spdk_25161080d9161b3ac5736003e4d7dc657855cdef.tar.gz 00:00:18.931 Sending request to url: http://10.211.164.101/packages/spdk_25161080d9161b3ac5736003e4d7dc657855cdef.tar.gz 00:00:18.940 Response Code: HTTP/1.1 200 OK 00:00:18.941 Success: Status code 200 is in the accepted range: 200,404 00:00:18.941 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_25161080d9161b3ac5736003e4d7dc657855cdef.tar.gz 00:02:14.072 [Pipeline] sh 00:02:14.356 + tar --no-same-owner -xf spdk_25161080d9161b3ac5736003e4d7dc657855cdef.tar.gz 00:02:16.896 [Pipeline] sh 00:02:17.181 + git -C spdk log --oneline -n5 00:02:17.181 25161080d spdk_nvme_perf: allocate buffers from socket_id reported by ctrlr 00:02:17.181 26acb15a6 nvme/pcie: allocate cq from device-local numa node's memory 00:02:17.181 be7837808 bdev/nvme: show `numa_socket_id` for bdev_nvme_get_controllers 00:02:17.181 cf710e481 nvme: populate socket_id for rdma controllers 00:02:17.181 f1ebf4106 nvme: populate socket_id for tcp controllers 00:02:17.193 [Pipeline] } 00:02:17.209 [Pipeline] // stage 00:02:17.218 [Pipeline] stage 00:02:17.220 [Pipeline] { (Prepare) 00:02:17.240 [Pipeline] writeFile 00:02:17.257 [Pipeline] sh 00:02:17.540 + logger -p user.info -t JENKINS-CI 00:02:17.553 [Pipeline] sh 00:02:17.835 + logger -p user.info -t JENKINS-CI 00:02:17.846 [Pipeline] sh 00:02:18.130 + cat autorun-spdk.conf 00:02:18.130 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:18.130 SPDK_TEST_NVMF=1 00:02:18.130 SPDK_TEST_NVME_CLI=1 00:02:18.130 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:18.130 SPDK_TEST_NVMF_NICS=e810 00:02:18.130 SPDK_TEST_VFIOUSER=1 00:02:18.130 SPDK_RUN_UBSAN=1 00:02:18.130 NET_TYPE=phy 00:02:18.137 RUN_NIGHTLY=0 00:02:18.141 [Pipeline] readFile 00:02:18.166 [Pipeline] withEnv 00:02:18.168 [Pipeline] { 00:02:18.182 [Pipeline] sh 00:02:18.466 + set -ex 00:02:18.466 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:18.466 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:18.466 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:18.466 ++ SPDK_TEST_NVMF=1 00:02:18.466 ++ SPDK_TEST_NVME_CLI=1 00:02:18.466 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:18.466 ++ SPDK_TEST_NVMF_NICS=e810 00:02:18.466 ++ SPDK_TEST_VFIOUSER=1 00:02:18.466 ++ SPDK_RUN_UBSAN=1 00:02:18.466 ++ NET_TYPE=phy 00:02:18.466 ++ RUN_NIGHTLY=0 00:02:18.466 + case $SPDK_TEST_NVMF_NICS in 00:02:18.466 + 
DRIVERS=ice 00:02:18.466 + [[ tcp == \r\d\m\a ]] 00:02:18.466 + [[ -n ice ]] 00:02:18.466 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:18.466 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:18.466 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:18.466 rmmod: ERROR: Module irdma is not currently loaded 00:02:18.466 rmmod: ERROR: Module i40iw is not currently loaded 00:02:18.466 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:18.466 + true 00:02:18.466 + for D in $DRIVERS 00:02:18.466 + sudo modprobe ice 00:02:18.466 + exit 0 00:02:18.475 [Pipeline] } 00:02:18.493 [Pipeline] // withEnv 00:02:18.499 [Pipeline] } 00:02:18.516 [Pipeline] // stage 00:02:18.526 [Pipeline] catchError 00:02:18.528 [Pipeline] { 00:02:18.541 [Pipeline] timeout 00:02:18.541 Timeout set to expire in 50 min 00:02:18.542 [Pipeline] { 00:02:18.555 [Pipeline] stage 00:02:18.557 [Pipeline] { (Tests) 00:02:18.573 [Pipeline] sh 00:02:18.856 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:18.856 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:18.856 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:18.856 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:18.856 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:18.856 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:18.856 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:18.856 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:18.856 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:18.856 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:18.856 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:18.856 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:18.856 + source /etc/os-release 00:02:18.856 ++ NAME='Fedora Linux' 00:02:18.856 ++ VERSION='38 (Cloud Edition)' 00:02:18.856 ++ ID=fedora 00:02:18.856 ++ VERSION_ID=38 00:02:18.856 ++ VERSION_CODENAME= 00:02:18.856 ++ PLATFORM_ID=platform:f38 00:02:18.856 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:18.856 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:18.856 ++ LOGO=fedora-logo-icon 00:02:18.856 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:18.856 ++ HOME_URL=https://fedoraproject.org/ 00:02:18.856 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:18.856 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:18.856 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:18.856 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:18.856 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:18.856 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:18.856 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:18.856 ++ SUPPORT_END=2024-05-14 00:02:18.856 ++ VARIANT='Cloud Edition' 00:02:18.856 ++ VARIANT_ID=cloud 00:02:18.856 + uname -a 00:02:18.856 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:18.857 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:19.791 Hugepages 00:02:19.791 node hugesize free / total 00:02:19.791 node0 1048576kB 0 / 0 00:02:20.049 node0 2048kB 0 / 0 00:02:20.049 node1 1048576kB 0 / 0 00:02:20.049 node1 2048kB 0 / 0 00:02:20.049 00:02:20.049 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:20.049 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:02:20.049 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:02:20.049 I/OAT 0000:00:04.2 8086 
0e22 0 ioatdma - - 00:02:20.049 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:02:20.049 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:02:20.049 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:02:20.049 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:02:20.049 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:02:20.049 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:02:20.049 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:02:20.049 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:02:20.049 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:02:20.049 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:02:20.049 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:02:20.049 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:02:20.049 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:02:20.049 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:02:20.049 + rm -f /tmp/spdk-ld-path 00:02:20.049 + source autorun-spdk.conf 00:02:20.049 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.049 ++ SPDK_TEST_NVMF=1 00:02:20.049 ++ SPDK_TEST_NVME_CLI=1 00:02:20.049 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:20.049 ++ SPDK_TEST_NVMF_NICS=e810 00:02:20.049 ++ SPDK_TEST_VFIOUSER=1 00:02:20.049 ++ SPDK_RUN_UBSAN=1 00:02:20.049 ++ NET_TYPE=phy 00:02:20.049 ++ RUN_NIGHTLY=0 00:02:20.049 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:20.049 + [[ -n '' ]] 00:02:20.049 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:20.049 + for M in /var/spdk/build-*-manifest.txt 00:02:20.049 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:20.049 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:20.049 + for M in /var/spdk/build-*-manifest.txt 00:02:20.049 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:20.049 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:20.049 ++ uname 00:02:20.049 + [[ Linux == \L\i\n\u\x ]] 00:02:20.049 + sudo dmesg -T 00:02:20.049 + sudo dmesg --clear 00:02:20.049 + dmesg_pid=547962 00:02:20.049 + [[ Fedora Linux == FreeBSD ]] 00:02:20.049 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:20.049 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:20.049 + sudo dmesg -Tw 00:02:20.049 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:20.049 + [[ -x /usr/src/fio-static/fio ]] 00:02:20.049 + export FIO_BIN=/usr/src/fio-static/fio 00:02:20.049 + FIO_BIN=/usr/src/fio-static/fio 00:02:20.049 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:20.049 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:20.049 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:20.049 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:20.049 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:20.049 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:20.049 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:20.049 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:20.049 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:20.049 Test configuration: 00:02:20.049 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.049 SPDK_TEST_NVMF=1 00:02:20.049 SPDK_TEST_NVME_CLI=1 00:02:20.049 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:20.049 SPDK_TEST_NVMF_NICS=e810 00:02:20.049 SPDK_TEST_VFIOUSER=1 00:02:20.049 SPDK_RUN_UBSAN=1 00:02:20.049 NET_TYPE=phy 00:02:20.049 RUN_NIGHTLY=0 15:39:17 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:20.049 15:39:17 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:20.049 15:39:17 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:20.049 15:39:17 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:20.049 15:39:17 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.049 15:39:17 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.049 15:39:17 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.049 15:39:17 -- paths/export.sh@5 -- $ export PATH 00:02:20.049 15:39:17 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.049 15:39:17 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:20.049 15:39:17 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:20.049 15:39:17 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720791557.XXXXXX 00:02:20.049 15:39:17 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720791557.0kj3BV 00:02:20.049 15:39:17 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:20.049 15:39:17 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:20.049 15:39:17 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:20.049 15:39:17 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:20.049 15:39:17 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:20.309 15:39:17 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:20.309 15:39:17 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:20.309 15:39:17 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.309 15:39:17 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:20.309 15:39:17 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:20.309 15:39:17 -- pm/common@17 -- $ local monitor 00:02:20.309 15:39:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.309 15:39:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.309 15:39:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.309 15:39:17 -- pm/common@21 -- $ date +%s 00:02:20.309 15:39:17 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.309 15:39:17 -- pm/common@21 -- $ date +%s 00:02:20.309 15:39:17 -- pm/common@25 -- $ sleep 1 00:02:20.309 15:39:17 -- pm/common@21 -- $ date +%s 00:02:20.309 15:39:17 -- pm/common@21 -- $ date +%s 00:02:20.309 15:39:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720791557 00:02:20.309 15:39:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720791557 00:02:20.309 15:39:17 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720791557 00:02:20.309 15:39:17 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720791557 00:02:20.309 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720791557_collect-vmstat.pm.log 00:02:20.309 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720791557_collect-cpu-load.pm.log 00:02:20.309 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720791557_collect-cpu-temp.pm.log 00:02:20.309 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720791557_collect-bmc-pm.bmc.pm.log 00:02:21.245 15:39:18 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:21.245 15:39:18 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:21.245 15:39:18 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:21.245 15:39:18 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:21.245 15:39:18 -- spdk/autobuild.sh@16 -- $ date -u 00:02:21.245 Fri Jul 12 01:39:18 PM UTC 2024 00:02:21.245 15:39:18 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:21.245 v24.09-pre-228-g25161080d 00:02:21.245 15:39:18 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:21.245 15:39:18 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:21.245 15:39:18 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:21.245 15:39:18 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:21.245 15:39:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:21.245 15:39:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.245 ************************************ 00:02:21.245 START TEST ubsan 00:02:21.245 ************************************ 00:02:21.245 15:39:18 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:21.245 using ubsan 00:02:21.245 00:02:21.245 real 0m0.000s 00:02:21.245 user 0m0.000s 00:02:21.245 sys 0m0.000s 00:02:21.245 15:39:18 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:21.245 15:39:18 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:21.245 ************************************ 00:02:21.245 END TEST ubsan 00:02:21.245 ************************************ 00:02:21.245 15:39:18 -- common/autotest_common.sh@1142 -- $ return 0 00:02:21.245 15:39:18 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:21.245 15:39:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:21.245 15:39:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:21.245 15:39:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:21.245 15:39:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:21.245 15:39:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:21.245 15:39:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:21.245 15:39:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:21.245 15:39:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:21.245 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:21.245 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:21.808 Using 'verbs' RDMA provider 00:02:32.371 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:42.353 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:42.353 Creating mk/config.mk...done. 00:02:42.353 Creating mk/cc.flags.mk...done. 00:02:42.353 Type 'make' to build. 00:02:42.353 15:39:39 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:42.353 15:39:39 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:42.353 15:39:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:42.353 15:39:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:42.353 ************************************ 00:02:42.353 START TEST make 00:02:42.353 ************************************ 00:02:42.353 15:39:39 make -- common/autotest_common.sh@1123 -- $ make -j48 00:02:42.353 make[1]: Nothing to be done for 'all'. 
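For reference, the SPDK build that the trace above performs reduces to roughly the following commands. The workspace path, configure flags, and -j48 parallelism are taken verbatim from this log; this is only a sketch of the sequence, not additional output from the run:

# enter the SPDK checkout used by this job
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# configure with the flag set shown at autobuild.sh@67 above
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
# build with the same parallelism used by "run_test make make -j48" above
make -j48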
00:02:44.269 The Meson build system 00:02:44.269 Version: 1.3.1 00:02:44.269 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:44.269 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:44.269 Build type: native build 00:02:44.269 Project name: libvfio-user 00:02:44.269 Project version: 0.0.1 00:02:44.269 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:44.269 C linker for the host machine: cc ld.bfd 2.39-16 00:02:44.269 Host machine cpu family: x86_64 00:02:44.269 Host machine cpu: x86_64 00:02:44.269 Run-time dependency threads found: YES 00:02:44.269 Library dl found: YES 00:02:44.269 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:44.269 Run-time dependency json-c found: YES 0.17 00:02:44.269 Run-time dependency cmocka found: YES 1.1.7 00:02:44.269 Program pytest-3 found: NO 00:02:44.269 Program flake8 found: NO 00:02:44.269 Program misspell-fixer found: NO 00:02:44.269 Program restructuredtext-lint found: NO 00:02:44.269 Program valgrind found: YES (/usr/bin/valgrind) 00:02:44.269 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:44.269 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:44.269 Compiler for C supports arguments -Wwrite-strings: YES 00:02:44.269 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:44.269 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:44.270 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:44.270 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:44.270 Build targets in project: 8 00:02:44.270 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:44.270 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:44.270 00:02:44.270 libvfio-user 0.0.1 00:02:44.270 00:02:44.270 User defined options 00:02:44.270 buildtype : debug 00:02:44.270 default_library: shared 00:02:44.270 libdir : /usr/local/lib 00:02:44.270 00:02:44.270 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:44.844 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:44.844 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:44.844 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:44.844 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:45.107 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:45.107 [5/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:45.107 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:45.107 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:45.107 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:45.107 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:45.107 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:45.107 [11/37] Compiling C object samples/null.p/null.c.o 00:02:45.107 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:45.107 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:45.107 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:45.107 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:45.107 [16/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:45.107 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:45.107 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:45.107 [19/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:45.107 [20/37] Compiling C object samples/server.p/server.c.o 00:02:45.107 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:45.107 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:45.107 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:45.107 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:45.107 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:45.367 [26/37] Compiling C object samples/client.p/client.c.o 00:02:45.367 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:45.367 [28/37] Linking target samples/client 00:02:45.367 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:45.367 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:45.367 [31/37] Linking target test/unit_tests 00:02:45.628 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:45.628 [33/37] Linking target samples/null 00:02:45.628 [34/37] Linking target samples/server 00:02:45.628 [35/37] Linking target samples/gpio-pci-idio-16 00:02:45.628 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:45.628 [37/37] Linking target samples/lspci 00:02:45.628 INFO: autodetecting backend as ninja 00:02:45.628 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
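The libvfio-user sub-build whose Meson output appears above corresponds, roughly, to the steps below. The source/build directories, the ninja invocation, and the DESTDIR install are taken from the log itself (the install command appears in the next entry); the exact meson setup command line is an assumption reconstructed from the "User defined options" the log reports (buildtype: debug, default_library: shared, libdir: /usr/local/lib):

# assumed setup invocation matching the reported options
meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user \
    --buildtype debug --default-library shared --libdir /usr/local/lib
# compile, as ninja does when it enters the build-debug directory above
ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
# stage the install under build/libvfio-user, as the next log entry shows
DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
    meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug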
00:02:45.628 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:46.571 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:46.571 ninja: no work to do. 00:02:51.841 The Meson build system 00:02:51.841 Version: 1.3.1 00:02:51.841 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:51.841 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:51.841 Build type: native build 00:02:51.841 Program cat found: YES (/usr/bin/cat) 00:02:51.841 Project name: DPDK 00:02:51.841 Project version: 24.03.0 00:02:51.841 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:51.841 C linker for the host machine: cc ld.bfd 2.39-16 00:02:51.841 Host machine cpu family: x86_64 00:02:51.841 Host machine cpu: x86_64 00:02:51.841 Message: ## Building in Developer Mode ## 00:02:51.841 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:51.841 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:51.841 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:51.841 Program python3 found: YES (/usr/bin/python3) 00:02:51.841 Program cat found: YES (/usr/bin/cat) 00:02:51.841 Compiler for C supports arguments -march=native: YES 00:02:51.841 Checking for size of "void *" : 8 00:02:51.841 Checking for size of "void *" : 8 (cached) 00:02:51.841 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:51.841 Library m found: YES 00:02:51.841 Library numa found: YES 00:02:51.841 Has header "numaif.h" : YES 00:02:51.841 Library fdt found: NO 00:02:51.841 Library execinfo found: NO 00:02:51.841 Has header "execinfo.h" : YES 00:02:51.841 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:51.841 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:51.841 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:51.841 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:51.841 Run-time dependency openssl found: YES 3.0.9 00:02:51.841 Run-time dependency libpcap found: YES 1.10.4 00:02:51.841 Has header "pcap.h" with dependency libpcap: YES 00:02:51.841 Compiler for C supports arguments -Wcast-qual: YES 00:02:51.841 Compiler for C supports arguments -Wdeprecated: YES 00:02:51.841 Compiler for C supports arguments -Wformat: YES 00:02:51.841 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:51.841 Compiler for C supports arguments -Wformat-security: NO 00:02:51.841 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:51.841 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:51.841 Compiler for C supports arguments -Wnested-externs: YES 00:02:51.841 Compiler for C supports arguments -Wold-style-definition: YES 00:02:51.841 Compiler for C supports arguments -Wpointer-arith: YES 00:02:51.841 Compiler for C supports arguments -Wsign-compare: YES 00:02:51.841 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:51.841 Compiler for C supports arguments -Wundef: YES 00:02:51.841 Compiler for C supports arguments -Wwrite-strings: YES 00:02:51.841 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:51.841 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:51.841 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:51.842 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:51.842 Program objdump found: YES (/usr/bin/objdump) 00:02:51.842 Compiler for C supports arguments -mavx512f: YES 00:02:51.842 Checking if "AVX512 checking" compiles: YES 00:02:51.842 Fetching value of define "__SSE4_2__" : 1 00:02:51.842 Fetching value of define "__AES__" : 1 00:02:51.842 Fetching value of define "__AVX__" : 1 00:02:51.842 Fetching value of define "__AVX2__" : (undefined) 00:02:51.842 Fetching value of define "__AVX512BW__" : (undefined) 00:02:51.842 Fetching value of define "__AVX512CD__" : (undefined) 00:02:51.842 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:51.842 Fetching value of define "__AVX512F__" : (undefined) 00:02:51.842 Fetching value of define "__AVX512VL__" : (undefined) 00:02:51.842 Fetching value of define "__PCLMUL__" : 1 00:02:51.842 Fetching value of define "__RDRND__" : 1 00:02:51.842 Fetching value of define "__RDSEED__" : (undefined) 00:02:51.842 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:51.842 Fetching value of define "__znver1__" : (undefined) 00:02:51.842 Fetching value of define "__znver2__" : (undefined) 00:02:51.842 Fetching value of define "__znver3__" : (undefined) 00:02:51.842 Fetching value of define "__znver4__" : (undefined) 00:02:51.842 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:51.842 Message: lib/log: Defining dependency "log" 00:02:51.842 Message: lib/kvargs: Defining dependency "kvargs" 00:02:51.842 Message: lib/telemetry: Defining dependency "telemetry" 00:02:51.842 Checking for function "getentropy" : NO 00:02:51.842 Message: lib/eal: Defining dependency "eal" 00:02:51.842 Message: lib/ring: Defining dependency "ring" 00:02:51.842 Message: lib/rcu: Defining dependency "rcu" 00:02:51.842 Message: lib/mempool: Defining dependency "mempool" 00:02:51.842 Message: lib/mbuf: Defining dependency "mbuf" 00:02:51.842 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:51.842 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:51.842 Compiler for C supports arguments -mpclmul: YES 00:02:51.842 Compiler for C supports arguments -maes: YES 00:02:51.842 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:51.842 Compiler for C supports arguments -mavx512bw: YES 00:02:51.842 Compiler for C supports arguments -mavx512dq: YES 00:02:51.842 Compiler for C supports arguments -mavx512vl: YES 00:02:51.842 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:51.842 Compiler for C supports arguments -mavx2: YES 00:02:51.842 Compiler for C supports arguments -mavx: YES 00:02:51.842 Message: lib/net: Defining dependency "net" 00:02:51.842 Message: lib/meter: Defining dependency "meter" 00:02:51.842 Message: lib/ethdev: Defining dependency "ethdev" 00:02:51.842 Message: lib/pci: Defining dependency "pci" 00:02:51.842 Message: lib/cmdline: Defining dependency "cmdline" 00:02:51.842 Message: lib/hash: Defining dependency "hash" 00:02:51.842 Message: lib/timer: Defining dependency "timer" 00:02:51.842 Message: lib/compressdev: Defining dependency "compressdev" 00:02:51.842 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:51.842 Message: lib/dmadev: Defining dependency "dmadev" 00:02:51.842 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:51.842 Message: lib/power: Defining dependency "power" 00:02:51.842 Message: lib/reorder: Defining dependency "reorder" 00:02:51.842 
Message: lib/security: Defining dependency "security" 00:02:51.842 Has header "linux/userfaultfd.h" : YES 00:02:51.842 Has header "linux/vduse.h" : YES 00:02:51.842 Message: lib/vhost: Defining dependency "vhost" 00:02:51.842 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:51.842 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:51.842 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:51.842 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:51.842 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:51.842 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:51.842 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:51.842 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:51.842 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:51.842 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:51.842 Program doxygen found: YES (/usr/bin/doxygen) 00:02:51.842 Configuring doxy-api-html.conf using configuration 00:02:51.842 Configuring doxy-api-man.conf using configuration 00:02:51.842 Program mandb found: YES (/usr/bin/mandb) 00:02:51.842 Program sphinx-build found: NO 00:02:51.842 Configuring rte_build_config.h using configuration 00:02:51.842 Message: 00:02:51.842 ================= 00:02:51.842 Applications Enabled 00:02:51.842 ================= 00:02:51.842 00:02:51.842 apps: 00:02:51.842 00:02:51.842 00:02:51.842 Message: 00:02:51.842 ================= 00:02:51.842 Libraries Enabled 00:02:51.842 ================= 00:02:51.842 00:02:51.842 libs: 00:02:51.842 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:51.842 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:51.842 cryptodev, dmadev, power, reorder, security, vhost, 00:02:51.842 00:02:51.842 Message: 00:02:51.842 =============== 00:02:51.842 Drivers Enabled 00:02:51.842 =============== 00:02:51.842 00:02:51.842 common: 00:02:51.842 00:02:51.842 bus: 00:02:51.842 pci, vdev, 00:02:51.842 mempool: 00:02:51.842 ring, 00:02:51.842 dma: 00:02:51.842 00:02:51.842 net: 00:02:51.842 00:02:51.842 crypto: 00:02:51.842 00:02:51.842 compress: 00:02:51.842 00:02:51.842 vdpa: 00:02:51.842 00:02:51.842 00:02:51.842 Message: 00:02:51.842 ================= 00:02:51.842 Content Skipped 00:02:51.842 ================= 00:02:51.842 00:02:51.842 apps: 00:02:51.842 dumpcap: explicitly disabled via build config 00:02:51.842 graph: explicitly disabled via build config 00:02:51.842 pdump: explicitly disabled via build config 00:02:51.842 proc-info: explicitly disabled via build config 00:02:51.842 test-acl: explicitly disabled via build config 00:02:51.842 test-bbdev: explicitly disabled via build config 00:02:51.842 test-cmdline: explicitly disabled via build config 00:02:51.842 test-compress-perf: explicitly disabled via build config 00:02:51.842 test-crypto-perf: explicitly disabled via build config 00:02:51.842 test-dma-perf: explicitly disabled via build config 00:02:51.842 test-eventdev: explicitly disabled via build config 00:02:51.842 test-fib: explicitly disabled via build config 00:02:51.842 test-flow-perf: explicitly disabled via build config 00:02:51.842 test-gpudev: explicitly disabled via build config 00:02:51.842 test-mldev: explicitly disabled via build config 00:02:51.842 test-pipeline: explicitly disabled via build config 00:02:51.842 test-pmd: explicitly disabled via build config 
00:02:51.842 test-regex: explicitly disabled via build config 00:02:51.842 test-sad: explicitly disabled via build config 00:02:51.842 test-security-perf: explicitly disabled via build config 00:02:51.842 00:02:51.842 libs: 00:02:51.842 argparse: explicitly disabled via build config 00:02:51.842 metrics: explicitly disabled via build config 00:02:51.842 acl: explicitly disabled via build config 00:02:51.842 bbdev: explicitly disabled via build config 00:02:51.842 bitratestats: explicitly disabled via build config 00:02:51.842 bpf: explicitly disabled via build config 00:02:51.842 cfgfile: explicitly disabled via build config 00:02:51.842 distributor: explicitly disabled via build config 00:02:51.842 efd: explicitly disabled via build config 00:02:51.842 eventdev: explicitly disabled via build config 00:02:51.842 dispatcher: explicitly disabled via build config 00:02:51.842 gpudev: explicitly disabled via build config 00:02:51.842 gro: explicitly disabled via build config 00:02:51.842 gso: explicitly disabled via build config 00:02:51.842 ip_frag: explicitly disabled via build config 00:02:51.842 jobstats: explicitly disabled via build config 00:02:51.842 latencystats: explicitly disabled via build config 00:02:51.842 lpm: explicitly disabled via build config 00:02:51.842 member: explicitly disabled via build config 00:02:51.842 pcapng: explicitly disabled via build config 00:02:51.842 rawdev: explicitly disabled via build config 00:02:51.842 regexdev: explicitly disabled via build config 00:02:51.842 mldev: explicitly disabled via build config 00:02:51.842 rib: explicitly disabled via build config 00:02:51.842 sched: explicitly disabled via build config 00:02:51.842 stack: explicitly disabled via build config 00:02:51.842 ipsec: explicitly disabled via build config 00:02:51.842 pdcp: explicitly disabled via build config 00:02:51.842 fib: explicitly disabled via build config 00:02:51.842 port: explicitly disabled via build config 00:02:51.842 pdump: explicitly disabled via build config 00:02:51.842 table: explicitly disabled via build config 00:02:51.842 pipeline: explicitly disabled via build config 00:02:51.842 graph: explicitly disabled via build config 00:02:51.842 node: explicitly disabled via build config 00:02:51.842 00:02:51.842 drivers: 00:02:51.842 common/cpt: not in enabled drivers build config 00:02:51.842 common/dpaax: not in enabled drivers build config 00:02:51.842 common/iavf: not in enabled drivers build config 00:02:51.842 common/idpf: not in enabled drivers build config 00:02:51.842 common/ionic: not in enabled drivers build config 00:02:51.842 common/mvep: not in enabled drivers build config 00:02:51.842 common/octeontx: not in enabled drivers build config 00:02:51.842 bus/auxiliary: not in enabled drivers build config 00:02:51.842 bus/cdx: not in enabled drivers build config 00:02:51.842 bus/dpaa: not in enabled drivers build config 00:02:51.842 bus/fslmc: not in enabled drivers build config 00:02:51.842 bus/ifpga: not in enabled drivers build config 00:02:51.842 bus/platform: not in enabled drivers build config 00:02:51.842 bus/uacce: not in enabled drivers build config 00:02:51.842 bus/vmbus: not in enabled drivers build config 00:02:51.842 common/cnxk: not in enabled drivers build config 00:02:51.842 common/mlx5: not in enabled drivers build config 00:02:51.842 common/nfp: not in enabled drivers build config 00:02:51.842 common/nitrox: not in enabled drivers build config 00:02:51.842 common/qat: not in enabled drivers build config 00:02:51.842 common/sfc_efx: not in 
enabled drivers build config 00:02:51.842 mempool/bucket: not in enabled drivers build config 00:02:51.842 mempool/cnxk: not in enabled drivers build config 00:02:51.842 mempool/dpaa: not in enabled drivers build config 00:02:51.842 mempool/dpaa2: not in enabled drivers build config 00:02:51.842 mempool/octeontx: not in enabled drivers build config 00:02:51.842 mempool/stack: not in enabled drivers build config 00:02:51.842 dma/cnxk: not in enabled drivers build config 00:02:51.842 dma/dpaa: not in enabled drivers build config 00:02:51.842 dma/dpaa2: not in enabled drivers build config 00:02:51.842 dma/hisilicon: not in enabled drivers build config 00:02:51.842 dma/idxd: not in enabled drivers build config 00:02:51.842 dma/ioat: not in enabled drivers build config 00:02:51.843 dma/skeleton: not in enabled drivers build config 00:02:51.843 net/af_packet: not in enabled drivers build config 00:02:51.843 net/af_xdp: not in enabled drivers build config 00:02:51.843 net/ark: not in enabled drivers build config 00:02:51.843 net/atlantic: not in enabled drivers build config 00:02:51.843 net/avp: not in enabled drivers build config 00:02:51.843 net/axgbe: not in enabled drivers build config 00:02:51.843 net/bnx2x: not in enabled drivers build config 00:02:51.843 net/bnxt: not in enabled drivers build config 00:02:51.843 net/bonding: not in enabled drivers build config 00:02:51.843 net/cnxk: not in enabled drivers build config 00:02:51.843 net/cpfl: not in enabled drivers build config 00:02:51.843 net/cxgbe: not in enabled drivers build config 00:02:51.843 net/dpaa: not in enabled drivers build config 00:02:51.843 net/dpaa2: not in enabled drivers build config 00:02:51.843 net/e1000: not in enabled drivers build config 00:02:51.843 net/ena: not in enabled drivers build config 00:02:51.843 net/enetc: not in enabled drivers build config 00:02:51.843 net/enetfec: not in enabled drivers build config 00:02:51.843 net/enic: not in enabled drivers build config 00:02:51.843 net/failsafe: not in enabled drivers build config 00:02:51.843 net/fm10k: not in enabled drivers build config 00:02:51.843 net/gve: not in enabled drivers build config 00:02:51.843 net/hinic: not in enabled drivers build config 00:02:51.843 net/hns3: not in enabled drivers build config 00:02:51.843 net/i40e: not in enabled drivers build config 00:02:51.843 net/iavf: not in enabled drivers build config 00:02:51.843 net/ice: not in enabled drivers build config 00:02:51.843 net/idpf: not in enabled drivers build config 00:02:51.843 net/igc: not in enabled drivers build config 00:02:51.843 net/ionic: not in enabled drivers build config 00:02:51.843 net/ipn3ke: not in enabled drivers build config 00:02:51.843 net/ixgbe: not in enabled drivers build config 00:02:51.843 net/mana: not in enabled drivers build config 00:02:51.843 net/memif: not in enabled drivers build config 00:02:51.843 net/mlx4: not in enabled drivers build config 00:02:51.843 net/mlx5: not in enabled drivers build config 00:02:51.843 net/mvneta: not in enabled drivers build config 00:02:51.843 net/mvpp2: not in enabled drivers build config 00:02:51.843 net/netvsc: not in enabled drivers build config 00:02:51.843 net/nfb: not in enabled drivers build config 00:02:51.843 net/nfp: not in enabled drivers build config 00:02:51.843 net/ngbe: not in enabled drivers build config 00:02:51.843 net/null: not in enabled drivers build config 00:02:51.843 net/octeontx: not in enabled drivers build config 00:02:51.843 net/octeon_ep: not in enabled drivers build config 00:02:51.843 
net/pcap: not in enabled drivers build config 00:02:51.843 net/pfe: not in enabled drivers build config 00:02:51.843 net/qede: not in enabled drivers build config 00:02:51.843 net/ring: not in enabled drivers build config 00:02:51.843 net/sfc: not in enabled drivers build config 00:02:51.843 net/softnic: not in enabled drivers build config 00:02:51.843 net/tap: not in enabled drivers build config 00:02:51.843 net/thunderx: not in enabled drivers build config 00:02:51.843 net/txgbe: not in enabled drivers build config 00:02:51.843 net/vdev_netvsc: not in enabled drivers build config 00:02:51.843 net/vhost: not in enabled drivers build config 00:02:51.843 net/virtio: not in enabled drivers build config 00:02:51.843 net/vmxnet3: not in enabled drivers build config 00:02:51.843 raw/*: missing internal dependency, "rawdev" 00:02:51.843 crypto/armv8: not in enabled drivers build config 00:02:51.843 crypto/bcmfs: not in enabled drivers build config 00:02:51.843 crypto/caam_jr: not in enabled drivers build config 00:02:51.843 crypto/ccp: not in enabled drivers build config 00:02:51.843 crypto/cnxk: not in enabled drivers build config 00:02:51.843 crypto/dpaa_sec: not in enabled drivers build config 00:02:51.843 crypto/dpaa2_sec: not in enabled drivers build config 00:02:51.843 crypto/ipsec_mb: not in enabled drivers build config 00:02:51.843 crypto/mlx5: not in enabled drivers build config 00:02:51.843 crypto/mvsam: not in enabled drivers build config 00:02:51.843 crypto/nitrox: not in enabled drivers build config 00:02:51.843 crypto/null: not in enabled drivers build config 00:02:51.843 crypto/octeontx: not in enabled drivers build config 00:02:51.843 crypto/openssl: not in enabled drivers build config 00:02:51.843 crypto/scheduler: not in enabled drivers build config 00:02:51.843 crypto/uadk: not in enabled drivers build config 00:02:51.843 crypto/virtio: not in enabled drivers build config 00:02:51.843 compress/isal: not in enabled drivers build config 00:02:51.843 compress/mlx5: not in enabled drivers build config 00:02:51.843 compress/nitrox: not in enabled drivers build config 00:02:51.843 compress/octeontx: not in enabled drivers build config 00:02:51.843 compress/zlib: not in enabled drivers build config 00:02:51.843 regex/*: missing internal dependency, "regexdev" 00:02:51.843 ml/*: missing internal dependency, "mldev" 00:02:51.843 vdpa/ifc: not in enabled drivers build config 00:02:51.843 vdpa/mlx5: not in enabled drivers build config 00:02:51.843 vdpa/nfp: not in enabled drivers build config 00:02:51.843 vdpa/sfc: not in enabled drivers build config 00:02:51.843 event/*: missing internal dependency, "eventdev" 00:02:51.843 baseband/*: missing internal dependency, "bbdev" 00:02:51.843 gpu/*: missing internal dependency, "gpudev" 00:02:51.843 00:02:51.843 00:02:51.843 Build targets in project: 85 00:02:51.843 00:02:51.843 DPDK 24.03.0 00:02:51.843 00:02:51.843 User defined options 00:02:51.843 buildtype : debug 00:02:51.843 default_library : shared 00:02:51.843 libdir : lib 00:02:51.843 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:51.843 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:51.843 c_link_args : 00:02:51.843 cpu_instruction_set: native 00:02:51.843 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:51.843 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:51.843 enable_docs : false 00:02:51.843 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:51.843 enable_kmods : false 00:02:51.843 max_lcores : 128 00:02:51.843 tests : false 00:02:51.843 00:02:51.843 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:51.843 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:51.843 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:51.843 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:51.843 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:51.843 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:51.843 [5/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:51.843 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:51.843 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:51.843 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:51.843 [9/268] Linking static target lib/librte_kvargs.a 00:02:51.843 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:51.843 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:51.843 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:51.843 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:51.843 [14/268] Linking static target lib/librte_log.a 00:02:51.843 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:51.843 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:52.781 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.781 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:52.781 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:52.781 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:52.781 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:52.781 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:52.781 [23/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:52.781 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:52.781 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:52.781 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:52.781 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:52.781 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:52.781 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:52.781 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:52.781 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:52.781 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:52.781 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:52.781 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:52.781 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:52.781 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:52.781 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:52.781 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:52.781 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:52.781 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:52.781 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:52.781 [42/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:52.781 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:52.781 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:52.781 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:52.781 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:52.781 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:52.781 [48/268] Linking static target lib/librte_telemetry.a 00:02:52.781 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:52.781 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:52.781 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:52.781 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:52.781 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:52.781 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:52.781 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:52.781 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:52.781 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:53.043 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:53.043 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:53.043 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:53.043 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:53.043 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:53.043 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:53.043 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.043 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:53.305 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:53.305 [67/268] Linking target lib/librte_log.so.24.1 00:02:53.305 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:53.305 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:53.305 [70/268] Linking static target lib/librte_pci.a 00:02:53.567 
[71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:53.567 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:53.567 [73/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:53.567 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:53.567 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:53.567 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:53.567 [77/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:53.567 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:53.567 [79/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:53.567 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:53.567 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:53.567 [82/268] Linking target lib/librte_kvargs.so.24.1 00:02:53.567 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:53.830 [84/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:53.830 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:53.830 [86/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:53.830 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:53.830 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:53.830 [89/268] Linking static target lib/librte_ring.a 00:02:53.830 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:53.830 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:53.830 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:53.830 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:53.830 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:53.830 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:53.830 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:53.830 [97/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:53.830 [98/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.830 [99/268] Linking static target lib/librte_meter.a 00:02:53.830 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:53.830 [101/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.830 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:53.830 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:53.830 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:53.830 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:53.830 [106/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:53.830 [107/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:53.830 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:53.830 [109/268] Linking target lib/librte_telemetry.so.24.1 00:02:53.830 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:53.830 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 
00:02:53.830 [112/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:54.088 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:54.088 [114/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:54.088 [115/268] Linking static target lib/librte_eal.a 00:02:54.088 [116/268] Linking static target lib/librte_rcu.a 00:02:54.088 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:54.088 [118/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:54.088 [119/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:54.088 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:54.088 [121/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:54.088 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:54.088 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:54.088 [124/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:54.088 [125/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:54.088 [126/268] Linking static target lib/librte_mempool.a 00:02:54.088 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:54.088 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:54.088 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:54.349 [130/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:54.349 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:54.349 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:54.349 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:54.349 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:54.349 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.349 [136/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:54.349 [137/268] Linking static target lib/librte_net.a 00:02:54.609 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:54.609 [139/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.609 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:54.609 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:54.609 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:54.609 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:54.609 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:54.609 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:54.609 [146/268] Linking static target lib/librte_cmdline.a 00:02:54.609 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:54.609 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:54.609 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:54.609 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.868 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:54.868 [152/268] Linking static 
target lib/librte_timer.a 00:02:54.868 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:54.868 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:54.868 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:54.868 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:54.868 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:54.868 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:54.868 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:54.868 [160/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.868 [161/268] Linking static target lib/librte_dmadev.a 00:02:54.868 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:55.125 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:55.125 [164/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:55.125 [165/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:55.125 [166/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:55.125 [167/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:55.125 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:55.125 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:55.125 [170/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.125 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:55.125 [172/268] Linking static target lib/librte_power.a 00:02:55.125 [173/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.125 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:55.383 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:55.383 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:55.383 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:55.383 [178/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:55.383 [179/268] Linking static target lib/librte_hash.a 00:02:55.383 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:55.383 [181/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:55.383 [182/268] Linking static target lib/librte_compressdev.a 00:02:55.383 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:55.383 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:55.383 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:55.383 [186/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.383 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:55.383 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:55.383 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:55.383 [190/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:55.641 [191/268] Linking static target lib/librte_reorder.a 00:02:55.641 [192/268] Compiling 
C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:55.641 [193/268] Linking static target lib/librte_mbuf.a 00:02:55.641 [194/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:55.641 [195/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:55.641 [196/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.641 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:55.641 [198/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:55.641 [199/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:55.641 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:55.641 [201/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:55.641 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.641 [203/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.641 [204/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.641 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.641 [206/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.641 [207/268] Linking static target drivers/librte_bus_vdev.a 00:02:55.641 [208/268] Linking static target drivers/librte_bus_pci.a 00:02:55.898 [209/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:55.898 [210/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.898 [211/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.898 [212/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:55.898 [213/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:55.898 [214/268] Linking static target lib/librte_security.a 00:02:55.898 [215/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.898 [216/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.898 [217/268] Linking static target drivers/librte_mempool_ring.a 00:02:55.898 [218/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.898 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.898 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.154 [221/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:56.154 [222/268] Linking static target lib/librte_cryptodev.a 00:02:56.154 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.154 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:56.154 [225/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.154 [226/268] Linking static target lib/librte_ethdev.a 00:02:57.087 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.457 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:00.357 [229/268] Generating lib/eal.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:00.357 [230/268] Linking target lib/librte_eal.so.24.1 00:03:00.357 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.357 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:00.357 [233/268] Linking target lib/librte_meter.so.24.1 00:03:00.357 [234/268] Linking target lib/librte_pci.so.24.1 00:03:00.357 [235/268] Linking target lib/librte_dmadev.so.24.1 00:03:00.357 [236/268] Linking target lib/librte_ring.so.24.1 00:03:00.357 [237/268] Linking target lib/librte_timer.so.24.1 00:03:00.357 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:00.615 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:00.615 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:00.615 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:00.615 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:00.615 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:00.615 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:00.615 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:00.615 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:00.873 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:00.873 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:00.873 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:00.873 [250/268] Linking target lib/librte_mbuf.so.24.1 00:03:00.873 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:00.873 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:00.873 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:00.873 [254/268] Linking target lib/librte_net.so.24.1 00:03:00.873 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:03:01.131 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:01.131 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:01.131 [258/268] Linking target lib/librte_security.so.24.1 00:03:01.131 [259/268] Linking target lib/librte_cmdline.so.24.1 00:03:01.131 [260/268] Linking target lib/librte_hash.so.24.1 00:03:01.131 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:01.131 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:01.389 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:01.389 [264/268] Linking target lib/librte_power.so.24.1 00:03:03.917 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:03.917 [266/268] Linking static target lib/librte_vhost.a 00:03:04.852 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.852 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:04.852 INFO: autodetecting backend as ninja 00:03:04.852 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:03:05.786 CC lib/log/log.o 00:03:05.786 CC lib/log/log_flags.o 00:03:05.786 CC lib/log/log_deprecated.o 00:03:05.786 CC lib/ut_mock/mock.o 00:03:05.786 CC lib/ut/ut.o 00:03:05.786 LIB 
libspdk_log.a 00:03:05.786 LIB libspdk_ut.a 00:03:05.786 LIB libspdk_ut_mock.a 00:03:05.786 SO libspdk_ut_mock.so.6.0 00:03:05.786 SO libspdk_ut.so.2.0 00:03:05.786 SO libspdk_log.so.7.0 00:03:06.043 SYMLINK libspdk_ut_mock.so 00:03:06.043 SYMLINK libspdk_ut.so 00:03:06.043 SYMLINK libspdk_log.so 00:03:06.043 CC lib/dma/dma.o 00:03:06.043 CC lib/ioat/ioat.o 00:03:06.043 CXX lib/trace_parser/trace.o 00:03:06.043 CC lib/util/base64.o 00:03:06.043 CC lib/util/bit_array.o 00:03:06.043 CC lib/util/cpuset.o 00:03:06.043 CC lib/util/crc16.o 00:03:06.043 CC lib/util/crc32.o 00:03:06.043 CC lib/util/crc32c.o 00:03:06.043 CC lib/util/crc32_ieee.o 00:03:06.043 CC lib/util/crc64.o 00:03:06.043 CC lib/util/dif.o 00:03:06.043 CC lib/util/fd.o 00:03:06.043 CC lib/util/fd_group.o 00:03:06.043 CC lib/util/file.o 00:03:06.043 CC lib/util/hexlify.o 00:03:06.043 CC lib/util/iov.o 00:03:06.043 CC lib/util/math.o 00:03:06.043 CC lib/util/net.o 00:03:06.043 CC lib/util/pipe.o 00:03:06.043 CC lib/util/strerror_tls.o 00:03:06.043 CC lib/util/string.o 00:03:06.043 CC lib/util/uuid.o 00:03:06.043 CC lib/util/zipf.o 00:03:06.043 CC lib/util/xor.o 00:03:06.300 CC lib/vfio_user/host/vfio_user_pci.o 00:03:06.300 CC lib/vfio_user/host/vfio_user.o 00:03:06.300 LIB libspdk_dma.a 00:03:06.300 SO libspdk_dma.so.4.0 00:03:06.558 LIB libspdk_ioat.a 00:03:06.558 SYMLINK libspdk_dma.so 00:03:06.558 SO libspdk_ioat.so.7.0 00:03:06.558 SYMLINK libspdk_ioat.so 00:03:06.558 LIB libspdk_vfio_user.a 00:03:06.558 SO libspdk_vfio_user.so.5.0 00:03:06.558 SYMLINK libspdk_vfio_user.so 00:03:06.558 LIB libspdk_util.a 00:03:06.814 SO libspdk_util.so.9.1 00:03:06.814 SYMLINK libspdk_util.so 00:03:07.071 CC lib/rdma_provider/common.o 00:03:07.071 CC lib/vmd/vmd.o 00:03:07.071 CC lib/rdma_utils/rdma_utils.o 00:03:07.071 CC lib/json/json_parse.o 00:03:07.071 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:07.071 CC lib/vmd/led.o 00:03:07.071 CC lib/json/json_util.o 00:03:07.071 CC lib/idxd/idxd.o 00:03:07.071 CC lib/idxd/idxd_user.o 00:03:07.071 CC lib/json/json_write.o 00:03:07.071 CC lib/conf/conf.o 00:03:07.071 CC lib/env_dpdk/env.o 00:03:07.071 CC lib/idxd/idxd_kernel.o 00:03:07.071 CC lib/env_dpdk/memory.o 00:03:07.071 CC lib/env_dpdk/pci.o 00:03:07.071 CC lib/env_dpdk/init.o 00:03:07.071 CC lib/env_dpdk/threads.o 00:03:07.071 CC lib/env_dpdk/pci_ioat.o 00:03:07.071 CC lib/env_dpdk/pci_virtio.o 00:03:07.071 CC lib/env_dpdk/pci_vmd.o 00:03:07.071 CC lib/env_dpdk/pci_idxd.o 00:03:07.071 CC lib/env_dpdk/pci_event.o 00:03:07.071 CC lib/env_dpdk/sigbus_handler.o 00:03:07.071 CC lib/env_dpdk/pci_dpdk.o 00:03:07.071 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:07.071 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:07.071 LIB libspdk_trace_parser.a 00:03:07.071 SO libspdk_trace_parser.so.5.0 00:03:07.328 LIB libspdk_conf.a 00:03:07.328 SYMLINK libspdk_trace_parser.so 00:03:07.328 SO libspdk_conf.so.6.0 00:03:07.328 LIB libspdk_rdma_provider.a 00:03:07.328 LIB libspdk_rdma_utils.a 00:03:07.328 LIB libspdk_json.a 00:03:07.328 SYMLINK libspdk_conf.so 00:03:07.328 SO libspdk_rdma_provider.so.6.0 00:03:07.328 SO libspdk_rdma_utils.so.1.0 00:03:07.328 SO libspdk_json.so.6.0 00:03:07.328 SYMLINK libspdk_rdma_provider.so 00:03:07.328 SYMLINK libspdk_rdma_utils.so 00:03:07.586 SYMLINK libspdk_json.so 00:03:07.586 CC lib/jsonrpc/jsonrpc_server.o 00:03:07.586 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:07.586 CC lib/jsonrpc/jsonrpc_client.o 00:03:07.586 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:07.586 LIB libspdk_idxd.a 00:03:07.586 SO libspdk_idxd.so.12.0 00:03:07.843 
SYMLINK libspdk_idxd.so 00:03:07.843 LIB libspdk_vmd.a 00:03:07.843 SO libspdk_vmd.so.6.0 00:03:07.843 SYMLINK libspdk_vmd.so 00:03:07.843 LIB libspdk_jsonrpc.a 00:03:07.843 SO libspdk_jsonrpc.so.6.0 00:03:08.101 SYMLINK libspdk_jsonrpc.so 00:03:08.101 CC lib/rpc/rpc.o 00:03:08.360 LIB libspdk_rpc.a 00:03:08.360 SO libspdk_rpc.so.6.0 00:03:08.360 SYMLINK libspdk_rpc.so 00:03:08.638 CC lib/notify/notify.o 00:03:08.638 CC lib/notify/notify_rpc.o 00:03:08.638 CC lib/keyring/keyring.o 00:03:08.638 CC lib/keyring/keyring_rpc.o 00:03:08.638 CC lib/trace/trace.o 00:03:08.638 CC lib/trace/trace_flags.o 00:03:08.638 CC lib/trace/trace_rpc.o 00:03:08.897 LIB libspdk_notify.a 00:03:08.897 SO libspdk_notify.so.6.0 00:03:08.897 LIB libspdk_keyring.a 00:03:08.897 SYMLINK libspdk_notify.so 00:03:08.897 LIB libspdk_trace.a 00:03:08.897 SO libspdk_keyring.so.1.0 00:03:08.897 SO libspdk_trace.so.10.0 00:03:08.897 SYMLINK libspdk_keyring.so 00:03:08.897 SYMLINK libspdk_trace.so 00:03:09.156 LIB libspdk_env_dpdk.a 00:03:09.156 CC lib/sock/sock.o 00:03:09.156 CC lib/sock/sock_rpc.o 00:03:09.156 CC lib/thread/thread.o 00:03:09.156 CC lib/thread/iobuf.o 00:03:09.156 SO libspdk_env_dpdk.so.15.0 00:03:09.414 SYMLINK libspdk_env_dpdk.so 00:03:09.414 LIB libspdk_sock.a 00:03:09.414 SO libspdk_sock.so.10.0 00:03:09.672 SYMLINK libspdk_sock.so 00:03:09.672 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:09.672 CC lib/nvme/nvme_ctrlr.o 00:03:09.672 CC lib/nvme/nvme_fabric.o 00:03:09.672 CC lib/nvme/nvme_ns_cmd.o 00:03:09.672 CC lib/nvme/nvme_ns.o 00:03:09.672 CC lib/nvme/nvme_pcie_common.o 00:03:09.672 CC lib/nvme/nvme_pcie.o 00:03:09.672 CC lib/nvme/nvme_qpair.o 00:03:09.672 CC lib/nvme/nvme.o 00:03:09.672 CC lib/nvme/nvme_quirks.o 00:03:09.672 CC lib/nvme/nvme_transport.o 00:03:09.672 CC lib/nvme/nvme_discovery.o 00:03:09.672 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:09.672 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:09.672 CC lib/nvme/nvme_tcp.o 00:03:09.672 CC lib/nvme/nvme_opal.o 00:03:09.672 CC lib/nvme/nvme_io_msg.o 00:03:09.672 CC lib/nvme/nvme_poll_group.o 00:03:09.672 CC lib/nvme/nvme_zns.o 00:03:09.672 CC lib/nvme/nvme_stubs.o 00:03:09.672 CC lib/nvme/nvme_auth.o 00:03:09.672 CC lib/nvme/nvme_cuse.o 00:03:09.672 CC lib/nvme/nvme_vfio_user.o 00:03:09.672 CC lib/nvme/nvme_rdma.o 00:03:11.043 LIB libspdk_thread.a 00:03:11.043 SO libspdk_thread.so.10.1 00:03:11.043 SYMLINK libspdk_thread.so 00:03:11.043 CC lib/init/json_config.o 00:03:11.043 CC lib/virtio/virtio.o 00:03:11.043 CC lib/blob/blobstore.o 00:03:11.043 CC lib/vfu_tgt/tgt_endpoint.o 00:03:11.043 CC lib/accel/accel.o 00:03:11.043 CC lib/virtio/virtio_vhost_user.o 00:03:11.043 CC lib/init/subsystem.o 00:03:11.043 CC lib/blob/request.o 00:03:11.043 CC lib/accel/accel_rpc.o 00:03:11.043 CC lib/virtio/virtio_vfio_user.o 00:03:11.043 CC lib/vfu_tgt/tgt_rpc.o 00:03:11.043 CC lib/blob/zeroes.o 00:03:11.043 CC lib/init/subsystem_rpc.o 00:03:11.043 CC lib/accel/accel_sw.o 00:03:11.043 CC lib/virtio/virtio_pci.o 00:03:11.043 CC lib/blob/blob_bs_dev.o 00:03:11.043 CC lib/init/rpc.o 00:03:11.301 LIB libspdk_init.a 00:03:11.301 SO libspdk_init.so.5.0 00:03:11.301 LIB libspdk_vfu_tgt.a 00:03:11.301 LIB libspdk_virtio.a 00:03:11.301 SYMLINK libspdk_init.so 00:03:11.301 SO libspdk_vfu_tgt.so.3.0 00:03:11.301 SO libspdk_virtio.so.7.0 00:03:11.557 SYMLINK libspdk_vfu_tgt.so 00:03:11.557 SYMLINK libspdk_virtio.so 00:03:11.557 CC lib/event/app.o 00:03:11.557 CC lib/event/reactor.o 00:03:11.557 CC lib/event/log_rpc.o 00:03:11.557 CC lib/event/app_rpc.o 00:03:11.557 CC 
lib/event/scheduler_static.o 00:03:11.818 LIB libspdk_event.a 00:03:12.075 SO libspdk_event.so.14.0 00:03:12.075 LIB libspdk_accel.a 00:03:12.075 SYMLINK libspdk_event.so 00:03:12.075 SO libspdk_accel.so.15.1 00:03:12.075 SYMLINK libspdk_accel.so 00:03:12.075 LIB libspdk_nvme.a 00:03:12.332 SO libspdk_nvme.so.13.1 00:03:12.332 CC lib/bdev/bdev.o 00:03:12.332 CC lib/bdev/bdev_rpc.o 00:03:12.332 CC lib/bdev/bdev_zone.o 00:03:12.332 CC lib/bdev/part.o 00:03:12.332 CC lib/bdev/scsi_nvme.o 00:03:12.591 SYMLINK libspdk_nvme.so 00:03:13.969 LIB libspdk_blob.a 00:03:13.969 SO libspdk_blob.so.11.0 00:03:13.969 SYMLINK libspdk_blob.so 00:03:14.226 CC lib/lvol/lvol.o 00:03:14.226 CC lib/blobfs/blobfs.o 00:03:14.226 CC lib/blobfs/tree.o 00:03:14.792 LIB libspdk_bdev.a 00:03:14.792 SO libspdk_bdev.so.15.1 00:03:15.053 SYMLINK libspdk_bdev.so 00:03:15.053 LIB libspdk_blobfs.a 00:03:15.053 CC lib/nbd/nbd.o 00:03:15.053 CC lib/scsi/dev.o 00:03:15.053 CC lib/ublk/ublk.o 00:03:15.053 CC lib/nbd/nbd_rpc.o 00:03:15.053 CC lib/scsi/lun.o 00:03:15.053 CC lib/nvmf/ctrlr.o 00:03:15.054 CC lib/ublk/ublk_rpc.o 00:03:15.054 CC lib/ftl/ftl_core.o 00:03:15.054 CC lib/nvmf/ctrlr_discovery.o 00:03:15.054 CC lib/ftl/ftl_init.o 00:03:15.054 CC lib/scsi/port.o 00:03:15.054 CC lib/nvmf/ctrlr_bdev.o 00:03:15.054 CC lib/ftl/ftl_layout.o 00:03:15.054 CC lib/scsi/scsi.o 00:03:15.054 CC lib/nvmf/subsystem.o 00:03:15.054 CC lib/ftl/ftl_debug.o 00:03:15.054 CC lib/scsi/scsi_bdev.o 00:03:15.054 CC lib/nvmf/nvmf.o 00:03:15.054 CC lib/ftl/ftl_io.o 00:03:15.054 CC lib/nvmf/nvmf_rpc.o 00:03:15.054 CC lib/scsi/scsi_pr.o 00:03:15.054 CC lib/ftl/ftl_sb.o 00:03:15.054 CC lib/scsi/scsi_rpc.o 00:03:15.054 CC lib/nvmf/transport.o 00:03:15.054 CC lib/nvmf/tcp.o 00:03:15.054 CC lib/ftl/ftl_l2p.o 00:03:15.054 CC lib/scsi/task.o 00:03:15.054 CC lib/ftl/ftl_l2p_flat.o 00:03:15.054 CC lib/ftl/ftl_nv_cache.o 00:03:15.054 CC lib/nvmf/stubs.o 00:03:15.054 CC lib/ftl/ftl_band.o 00:03:15.054 CC lib/nvmf/mdns_server.o 00:03:15.054 CC lib/ftl/ftl_band_ops.o 00:03:15.054 CC lib/nvmf/vfio_user.o 00:03:15.054 CC lib/nvmf/rdma.o 00:03:15.054 CC lib/ftl/ftl_writer.o 00:03:15.054 CC lib/ftl/ftl_rq.o 00:03:15.054 CC lib/nvmf/auth.o 00:03:15.054 CC lib/ftl/ftl_reloc.o 00:03:15.054 CC lib/ftl/ftl_l2p_cache.o 00:03:15.054 CC lib/ftl/ftl_p2l.o 00:03:15.054 CC lib/ftl/mngt/ftl_mngt.o 00:03:15.054 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:15.054 SO libspdk_blobfs.so.10.0 00:03:15.054 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:15.054 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:15.054 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:15.315 SYMLINK libspdk_blobfs.so 00:03:15.316 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:15.316 LIB libspdk_lvol.a 00:03:15.316 SO libspdk_lvol.so.10.0 00:03:15.316 SYMLINK libspdk_lvol.so 00:03:15.316 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:15.577 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:15.577 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:15.577 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:15.577 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:15.577 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:15.577 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:15.577 CC lib/ftl/utils/ftl_conf.o 00:03:15.577 CC lib/ftl/utils/ftl_md.o 00:03:15.577 CC lib/ftl/utils/ftl_mempool.o 00:03:15.577 CC lib/ftl/utils/ftl_bitmap.o 00:03:15.577 CC lib/ftl/utils/ftl_property.o 00:03:15.577 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:15.577 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:15.577 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:15.577 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:15.577 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:03:15.577 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:15.577 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:15.577 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:15.837 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:15.837 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:15.837 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:15.837 CC lib/ftl/base/ftl_base_dev.o 00:03:15.837 CC lib/ftl/base/ftl_base_bdev.o 00:03:15.837 CC lib/ftl/ftl_trace.o 00:03:15.837 LIB libspdk_nbd.a 00:03:15.837 SO libspdk_nbd.so.7.0 00:03:16.094 SYMLINK libspdk_nbd.so 00:03:16.094 LIB libspdk_scsi.a 00:03:16.094 SO libspdk_scsi.so.9.0 00:03:16.094 LIB libspdk_ublk.a 00:03:16.351 SYMLINK libspdk_scsi.so 00:03:16.351 SO libspdk_ublk.so.3.0 00:03:16.351 SYMLINK libspdk_ublk.so 00:03:16.351 CC lib/vhost/vhost.o 00:03:16.351 CC lib/iscsi/conn.o 00:03:16.351 CC lib/vhost/vhost_rpc.o 00:03:16.351 CC lib/vhost/vhost_scsi.o 00:03:16.351 CC lib/iscsi/init_grp.o 00:03:16.351 CC lib/vhost/vhost_blk.o 00:03:16.351 CC lib/iscsi/iscsi.o 00:03:16.351 CC lib/vhost/rte_vhost_user.o 00:03:16.351 CC lib/iscsi/md5.o 00:03:16.351 CC lib/iscsi/param.o 00:03:16.351 CC lib/iscsi/portal_grp.o 00:03:16.351 CC lib/iscsi/tgt_node.o 00:03:16.351 CC lib/iscsi/iscsi_subsystem.o 00:03:16.351 CC lib/iscsi/iscsi_rpc.o 00:03:16.351 CC lib/iscsi/task.o 00:03:16.609 LIB libspdk_ftl.a 00:03:16.609 SO libspdk_ftl.so.9.0 00:03:17.174 SYMLINK libspdk_ftl.so 00:03:17.740 LIB libspdk_vhost.a 00:03:17.740 LIB libspdk_nvmf.a 00:03:17.740 SO libspdk_vhost.so.8.0 00:03:17.740 SO libspdk_nvmf.so.18.1 00:03:17.740 SYMLINK libspdk_vhost.so 00:03:17.740 LIB libspdk_iscsi.a 00:03:17.998 SO libspdk_iscsi.so.8.0 00:03:17.998 SYMLINK libspdk_nvmf.so 00:03:17.998 SYMLINK libspdk_iscsi.so 00:03:18.256 CC module/vfu_device/vfu_virtio.o 00:03:18.256 CC module/vfu_device/vfu_virtio_blk.o 00:03:18.256 CC module/vfu_device/vfu_virtio_scsi.o 00:03:18.256 CC module/env_dpdk/env_dpdk_rpc.o 00:03:18.256 CC module/vfu_device/vfu_virtio_rpc.o 00:03:18.256 CC module/accel/error/accel_error.o 00:03:18.256 CC module/accel/error/accel_error_rpc.o 00:03:18.256 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:18.256 CC module/blob/bdev/blob_bdev.o 00:03:18.256 CC module/accel/ioat/accel_ioat.o 00:03:18.256 CC module/accel/dsa/accel_dsa.o 00:03:18.256 CC module/accel/ioat/accel_ioat_rpc.o 00:03:18.256 CC module/accel/iaa/accel_iaa.o 00:03:18.256 CC module/sock/posix/posix.o 00:03:18.256 CC module/accel/dsa/accel_dsa_rpc.o 00:03:18.256 CC module/accel/iaa/accel_iaa_rpc.o 00:03:18.256 CC module/keyring/linux/keyring.o 00:03:18.256 CC module/keyring/file/keyring.o 00:03:18.256 CC module/keyring/linux/keyring_rpc.o 00:03:18.256 CC module/keyring/file/keyring_rpc.o 00:03:18.256 CC module/scheduler/gscheduler/gscheduler.o 00:03:18.256 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:18.514 LIB libspdk_env_dpdk_rpc.a 00:03:18.514 SO libspdk_env_dpdk_rpc.so.6.0 00:03:18.514 SYMLINK libspdk_env_dpdk_rpc.so 00:03:18.514 LIB libspdk_keyring_file.a 00:03:18.514 LIB libspdk_keyring_linux.a 00:03:18.514 LIB libspdk_scheduler_gscheduler.a 00:03:18.514 LIB libspdk_scheduler_dpdk_governor.a 00:03:18.514 SO libspdk_keyring_file.so.1.0 00:03:18.514 SO libspdk_keyring_linux.so.1.0 00:03:18.514 SO libspdk_scheduler_gscheduler.so.4.0 00:03:18.514 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:18.514 LIB libspdk_accel_error.a 00:03:18.514 LIB libspdk_accel_ioat.a 00:03:18.514 LIB libspdk_scheduler_dynamic.a 00:03:18.514 LIB libspdk_accel_iaa.a 00:03:18.514 SO libspdk_accel_error.so.2.0 00:03:18.514 SO 
libspdk_accel_ioat.so.6.0 00:03:18.514 SYMLINK libspdk_keyring_file.so 00:03:18.514 SYMLINK libspdk_scheduler_gscheduler.so 00:03:18.514 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:18.514 SYMLINK libspdk_keyring_linux.so 00:03:18.514 SO libspdk_scheduler_dynamic.so.4.0 00:03:18.514 SO libspdk_accel_iaa.so.3.0 00:03:18.773 LIB libspdk_accel_dsa.a 00:03:18.773 SYMLINK libspdk_accel_error.so 00:03:18.773 LIB libspdk_blob_bdev.a 00:03:18.773 SYMLINK libspdk_accel_ioat.so 00:03:18.773 SYMLINK libspdk_scheduler_dynamic.so 00:03:18.773 SO libspdk_accel_dsa.so.5.0 00:03:18.773 SYMLINK libspdk_accel_iaa.so 00:03:18.773 SO libspdk_blob_bdev.so.11.0 00:03:18.773 SYMLINK libspdk_blob_bdev.so 00:03:18.773 SYMLINK libspdk_accel_dsa.so 00:03:19.035 LIB libspdk_vfu_device.a 00:03:19.035 SO libspdk_vfu_device.so.3.0 00:03:19.035 CC module/bdev/gpt/gpt.o 00:03:19.035 CC module/blobfs/bdev/blobfs_bdev.o 00:03:19.035 CC module/bdev/gpt/vbdev_gpt.o 00:03:19.035 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:19.035 CC module/bdev/lvol/vbdev_lvol.o 00:03:19.035 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:19.035 CC module/bdev/nvme/bdev_nvme.o 00:03:19.035 CC module/bdev/null/bdev_null.o 00:03:19.035 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:19.035 CC module/bdev/error/vbdev_error.o 00:03:19.035 CC module/bdev/error/vbdev_error_rpc.o 00:03:19.035 CC module/bdev/null/bdev_null_rpc.o 00:03:19.035 CC module/bdev/passthru/vbdev_passthru.o 00:03:19.035 CC module/bdev/nvme/nvme_rpc.o 00:03:19.035 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:19.035 CC module/bdev/nvme/bdev_mdns_client.o 00:03:19.035 CC module/bdev/malloc/bdev_malloc.o 00:03:19.035 CC module/bdev/nvme/vbdev_opal.o 00:03:19.035 CC module/bdev/raid/bdev_raid.o 00:03:19.035 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:19.035 CC module/bdev/delay/vbdev_delay.o 00:03:19.035 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:19.035 CC module/bdev/raid/bdev_raid_rpc.o 00:03:19.035 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:19.035 CC module/bdev/iscsi/bdev_iscsi.o 00:03:19.035 CC module/bdev/aio/bdev_aio.o 00:03:19.035 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:19.035 CC module/bdev/aio/bdev_aio_rpc.o 00:03:19.035 CC module/bdev/raid/bdev_raid_sb.o 00:03:19.035 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:19.035 CC module/bdev/split/vbdev_split.o 00:03:19.035 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:19.035 CC module/bdev/raid/raid0.o 00:03:19.035 CC module/bdev/ftl/bdev_ftl.o 00:03:19.035 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:19.035 CC module/bdev/split/vbdev_split_rpc.o 00:03:19.035 CC module/bdev/raid/raid1.o 00:03:19.035 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:19.035 CC module/bdev/raid/concat.o 00:03:19.035 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:19.035 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:19.035 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:19.035 SYMLINK libspdk_vfu_device.so 00:03:19.293 LIB libspdk_sock_posix.a 00:03:19.293 SO libspdk_sock_posix.so.6.0 00:03:19.293 LIB libspdk_blobfs_bdev.a 00:03:19.293 SO libspdk_blobfs_bdev.so.6.0 00:03:19.551 LIB libspdk_bdev_split.a 00:03:19.551 LIB libspdk_bdev_gpt.a 00:03:19.551 LIB libspdk_bdev_error.a 00:03:19.551 SYMLINK libspdk_sock_posix.so 00:03:19.551 SYMLINK libspdk_blobfs_bdev.so 00:03:19.551 SO libspdk_bdev_split.so.6.0 00:03:19.551 SO libspdk_bdev_gpt.so.6.0 00:03:19.551 SO libspdk_bdev_error.so.6.0 00:03:19.551 LIB libspdk_bdev_null.a 00:03:19.551 SO libspdk_bdev_null.so.6.0 00:03:19.551 SYMLINK libspdk_bdev_split.so 00:03:19.551 SYMLINK 
libspdk_bdev_error.so 00:03:19.551 LIB libspdk_bdev_ftl.a 00:03:19.551 SYMLINK libspdk_bdev_gpt.so 00:03:19.551 LIB libspdk_bdev_passthru.a 00:03:19.551 LIB libspdk_bdev_malloc.a 00:03:19.551 SO libspdk_bdev_passthru.so.6.0 00:03:19.551 SO libspdk_bdev_ftl.so.6.0 00:03:19.551 LIB libspdk_bdev_iscsi.a 00:03:19.551 LIB libspdk_bdev_zone_block.a 00:03:19.551 SYMLINK libspdk_bdev_null.so 00:03:19.551 LIB libspdk_bdev_aio.a 00:03:19.551 SO libspdk_bdev_malloc.so.6.0 00:03:19.551 SO libspdk_bdev_zone_block.so.6.0 00:03:19.551 SO libspdk_bdev_aio.so.6.0 00:03:19.552 SO libspdk_bdev_iscsi.so.6.0 00:03:19.552 LIB libspdk_bdev_delay.a 00:03:19.552 SYMLINK libspdk_bdev_passthru.so 00:03:19.552 SYMLINK libspdk_bdev_ftl.so 00:03:19.552 SO libspdk_bdev_delay.so.6.0 00:03:19.552 SYMLINK libspdk_bdev_malloc.so 00:03:19.552 SYMLINK libspdk_bdev_aio.so 00:03:19.552 SYMLINK libspdk_bdev_iscsi.so 00:03:19.552 SYMLINK libspdk_bdev_zone_block.so 00:03:19.809 SYMLINK libspdk_bdev_delay.so 00:03:19.809 LIB libspdk_bdev_virtio.a 00:03:19.809 SO libspdk_bdev_virtio.so.6.0 00:03:19.809 LIB libspdk_bdev_lvol.a 00:03:19.809 SO libspdk_bdev_lvol.so.6.0 00:03:19.809 SYMLINK libspdk_bdev_virtio.so 00:03:19.809 SYMLINK libspdk_bdev_lvol.so 00:03:20.068 LIB libspdk_bdev_raid.a 00:03:20.068 SO libspdk_bdev_raid.so.6.0 00:03:20.326 SYMLINK libspdk_bdev_raid.so 00:03:21.262 LIB libspdk_bdev_nvme.a 00:03:21.262 SO libspdk_bdev_nvme.so.7.0 00:03:21.262 SYMLINK libspdk_bdev_nvme.so 00:03:21.829 CC module/event/subsystems/keyring/keyring.o 00:03:21.829 CC module/event/subsystems/iobuf/iobuf.o 00:03:21.829 CC module/event/subsystems/sock/sock.o 00:03:21.829 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:21.829 CC module/event/subsystems/scheduler/scheduler.o 00:03:21.829 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:21.829 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:21.829 CC module/event/subsystems/vmd/vmd.o 00:03:21.829 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:21.829 LIB libspdk_event_keyring.a 00:03:21.829 LIB libspdk_event_vhost_blk.a 00:03:21.829 LIB libspdk_event_scheduler.a 00:03:21.829 LIB libspdk_event_vfu_tgt.a 00:03:21.829 LIB libspdk_event_vmd.a 00:03:21.829 LIB libspdk_event_sock.a 00:03:21.829 SO libspdk_event_keyring.so.1.0 00:03:21.829 SO libspdk_event_vhost_blk.so.3.0 00:03:21.829 LIB libspdk_event_iobuf.a 00:03:21.829 SO libspdk_event_scheduler.so.4.0 00:03:21.829 SO libspdk_event_vfu_tgt.so.3.0 00:03:21.829 SO libspdk_event_sock.so.5.0 00:03:21.829 SO libspdk_event_vmd.so.6.0 00:03:21.829 SO libspdk_event_iobuf.so.3.0 00:03:22.087 SYMLINK libspdk_event_keyring.so 00:03:22.087 SYMLINK libspdk_event_vhost_blk.so 00:03:22.087 SYMLINK libspdk_event_scheduler.so 00:03:22.087 SYMLINK libspdk_event_vfu_tgt.so 00:03:22.087 SYMLINK libspdk_event_sock.so 00:03:22.087 SYMLINK libspdk_event_vmd.so 00:03:22.087 SYMLINK libspdk_event_iobuf.so 00:03:22.087 CC module/event/subsystems/accel/accel.o 00:03:22.345 LIB libspdk_event_accel.a 00:03:22.345 SO libspdk_event_accel.so.6.0 00:03:22.345 SYMLINK libspdk_event_accel.so 00:03:22.604 CC module/event/subsystems/bdev/bdev.o 00:03:22.862 LIB libspdk_event_bdev.a 00:03:22.862 SO libspdk_event_bdev.so.6.0 00:03:22.862 SYMLINK libspdk_event_bdev.so 00:03:23.120 CC module/event/subsystems/ublk/ublk.o 00:03:23.120 CC module/event/subsystems/nbd/nbd.o 00:03:23.120 CC module/event/subsystems/scsi/scsi.o 00:03:23.120 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:23.120 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:23.120 LIB libspdk_event_ublk.a 
00:03:23.120 LIB libspdk_event_nbd.a 00:03:23.120 LIB libspdk_event_scsi.a 00:03:23.120 SO libspdk_event_nbd.so.6.0 00:03:23.120 SO libspdk_event_ublk.so.3.0 00:03:23.120 SO libspdk_event_scsi.so.6.0 00:03:23.120 SYMLINK libspdk_event_ublk.so 00:03:23.121 SYMLINK libspdk_event_nbd.so 00:03:23.121 SYMLINK libspdk_event_scsi.so 00:03:23.379 LIB libspdk_event_nvmf.a 00:03:23.379 SO libspdk_event_nvmf.so.6.0 00:03:23.379 SYMLINK libspdk_event_nvmf.so 00:03:23.379 CC module/event/subsystems/iscsi/iscsi.o 00:03:23.379 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:23.639 LIB libspdk_event_vhost_scsi.a 00:03:23.639 LIB libspdk_event_iscsi.a 00:03:23.639 SO libspdk_event_vhost_scsi.so.3.0 00:03:23.639 SO libspdk_event_iscsi.so.6.0 00:03:23.639 SYMLINK libspdk_event_vhost_scsi.so 00:03:23.639 SYMLINK libspdk_event_iscsi.so 00:03:23.901 SO libspdk.so.6.0 00:03:23.901 SYMLINK libspdk.so 00:03:23.901 CC app/trace_record/trace_record.o 00:03:23.901 CXX app/trace/trace.o 00:03:23.901 CC app/spdk_top/spdk_top.o 00:03:23.901 CC test/rpc_client/rpc_client_test.o 00:03:23.901 CC app/spdk_nvme_identify/identify.o 00:03:23.901 TEST_HEADER include/spdk/accel.h 00:03:23.901 CC app/spdk_lspci/spdk_lspci.o 00:03:23.901 TEST_HEADER include/spdk/accel_module.h 00:03:23.901 CC app/spdk_nvme_perf/perf.o 00:03:23.901 TEST_HEADER include/spdk/assert.h 00:03:23.901 TEST_HEADER include/spdk/barrier.h 00:03:23.901 TEST_HEADER include/spdk/base64.h 00:03:23.901 TEST_HEADER include/spdk/bdev.h 00:03:23.901 TEST_HEADER include/spdk/bdev_module.h 00:03:23.901 TEST_HEADER include/spdk/bdev_zone.h 00:03:23.901 TEST_HEADER include/spdk/bit_array.h 00:03:23.901 TEST_HEADER include/spdk/bit_pool.h 00:03:23.901 CC app/spdk_nvme_discover/discovery_aer.o 00:03:23.901 TEST_HEADER include/spdk/blob_bdev.h 00:03:23.901 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:23.901 TEST_HEADER include/spdk/blobfs.h 00:03:23.901 TEST_HEADER include/spdk/blob.h 00:03:23.901 TEST_HEADER include/spdk/conf.h 00:03:23.901 TEST_HEADER include/spdk/config.h 00:03:23.901 TEST_HEADER include/spdk/cpuset.h 00:03:23.901 TEST_HEADER include/spdk/crc16.h 00:03:23.901 TEST_HEADER include/spdk/crc32.h 00:03:23.901 TEST_HEADER include/spdk/crc64.h 00:03:23.901 TEST_HEADER include/spdk/dif.h 00:03:23.901 TEST_HEADER include/spdk/dma.h 00:03:23.901 TEST_HEADER include/spdk/endian.h 00:03:23.901 TEST_HEADER include/spdk/env_dpdk.h 00:03:23.901 TEST_HEADER include/spdk/env.h 00:03:23.901 TEST_HEADER include/spdk/event.h 00:03:23.901 TEST_HEADER include/spdk/fd_group.h 00:03:23.901 TEST_HEADER include/spdk/fd.h 00:03:23.901 TEST_HEADER include/spdk/file.h 00:03:23.901 TEST_HEADER include/spdk/ftl.h 00:03:23.901 TEST_HEADER include/spdk/gpt_spec.h 00:03:23.901 TEST_HEADER include/spdk/hexlify.h 00:03:23.901 TEST_HEADER include/spdk/histogram_data.h 00:03:23.901 TEST_HEADER include/spdk/idxd.h 00:03:23.901 TEST_HEADER include/spdk/idxd_spec.h 00:03:23.901 TEST_HEADER include/spdk/init.h 00:03:23.901 TEST_HEADER include/spdk/ioat.h 00:03:23.901 TEST_HEADER include/spdk/ioat_spec.h 00:03:23.901 TEST_HEADER include/spdk/iscsi_spec.h 00:03:23.901 TEST_HEADER include/spdk/json.h 00:03:23.901 TEST_HEADER include/spdk/jsonrpc.h 00:03:23.901 TEST_HEADER include/spdk/keyring.h 00:03:23.901 TEST_HEADER include/spdk/keyring_module.h 00:03:23.901 TEST_HEADER include/spdk/likely.h 00:03:23.901 TEST_HEADER include/spdk/log.h 00:03:23.901 TEST_HEADER include/spdk/lvol.h 00:03:23.901 TEST_HEADER include/spdk/memory.h 00:03:23.901 TEST_HEADER include/spdk/nbd.h 00:03:23.901 
TEST_HEADER include/spdk/mmio.h 00:03:23.901 TEST_HEADER include/spdk/net.h 00:03:23.901 TEST_HEADER include/spdk/notify.h 00:03:23.901 TEST_HEADER include/spdk/nvme.h 00:03:23.901 TEST_HEADER include/spdk/nvme_intel.h 00:03:23.901 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:23.901 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:23.901 TEST_HEADER include/spdk/nvme_spec.h 00:03:23.901 TEST_HEADER include/spdk/nvme_zns.h 00:03:23.901 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:23.901 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:23.901 TEST_HEADER include/spdk/nvmf.h 00:03:23.901 TEST_HEADER include/spdk/nvmf_spec.h 00:03:23.901 TEST_HEADER include/spdk/nvmf_transport.h 00:03:23.901 TEST_HEADER include/spdk/opal.h 00:03:23.901 TEST_HEADER include/spdk/opal_spec.h 00:03:23.901 TEST_HEADER include/spdk/pci_ids.h 00:03:23.901 TEST_HEADER include/spdk/pipe.h 00:03:23.901 TEST_HEADER include/spdk/queue.h 00:03:23.901 TEST_HEADER include/spdk/reduce.h 00:03:23.901 TEST_HEADER include/spdk/rpc.h 00:03:23.901 TEST_HEADER include/spdk/scheduler.h 00:03:23.901 TEST_HEADER include/spdk/scsi.h 00:03:23.901 TEST_HEADER include/spdk/scsi_spec.h 00:03:23.901 TEST_HEADER include/spdk/sock.h 00:03:23.901 TEST_HEADER include/spdk/stdinc.h 00:03:23.901 TEST_HEADER include/spdk/string.h 00:03:23.901 TEST_HEADER include/spdk/thread.h 00:03:23.901 TEST_HEADER include/spdk/trace_parser.h 00:03:23.901 TEST_HEADER include/spdk/trace.h 00:03:23.901 TEST_HEADER include/spdk/tree.h 00:03:23.901 TEST_HEADER include/spdk/ublk.h 00:03:23.901 TEST_HEADER include/spdk/util.h 00:03:23.901 TEST_HEADER include/spdk/uuid.h 00:03:23.901 TEST_HEADER include/spdk/version.h 00:03:23.901 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:23.901 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:23.901 TEST_HEADER include/spdk/vhost.h 00:03:23.901 TEST_HEADER include/spdk/vmd.h 00:03:23.901 TEST_HEADER include/spdk/xor.h 00:03:23.901 TEST_HEADER include/spdk/zipf.h 00:03:23.901 CXX test/cpp_headers/accel.o 00:03:23.901 CXX test/cpp_headers/accel_module.o 00:03:23.901 CXX test/cpp_headers/assert.o 00:03:23.901 CXX test/cpp_headers/barrier.o 00:03:23.901 CXX test/cpp_headers/base64.o 00:03:23.901 CXX test/cpp_headers/bdev.o 00:03:23.901 CC app/spdk_dd/spdk_dd.o 00:03:23.901 CXX test/cpp_headers/bdev_module.o 00:03:23.901 CXX test/cpp_headers/bdev_zone.o 00:03:23.901 CXX test/cpp_headers/bit_array.o 00:03:23.901 CXX test/cpp_headers/bit_pool.o 00:03:24.163 CXX test/cpp_headers/blob_bdev.o 00:03:24.163 CXX test/cpp_headers/blobfs_bdev.o 00:03:24.163 CXX test/cpp_headers/blobfs.o 00:03:24.163 CXX test/cpp_headers/blob.o 00:03:24.163 CXX test/cpp_headers/conf.o 00:03:24.163 CXX test/cpp_headers/config.o 00:03:24.163 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:24.163 CXX test/cpp_headers/cpuset.o 00:03:24.163 CXX test/cpp_headers/crc16.o 00:03:24.163 CC app/nvmf_tgt/nvmf_main.o 00:03:24.163 CC app/iscsi_tgt/iscsi_tgt.o 00:03:24.163 CXX test/cpp_headers/crc32.o 00:03:24.163 CC app/spdk_tgt/spdk_tgt.o 00:03:24.163 CC test/thread/poller_perf/poller_perf.o 00:03:24.163 CC examples/util/zipf/zipf.o 00:03:24.163 CC examples/ioat/perf/perf.o 00:03:24.163 CC test/env/pci/pci_ut.o 00:03:24.163 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:24.163 CC test/app/jsoncat/jsoncat.o 00:03:24.163 CC test/app/histogram_perf/histogram_perf.o 00:03:24.163 CC app/fio/nvme/fio_plugin.o 00:03:24.163 CC examples/ioat/verify/verify.o 00:03:24.163 CC test/env/vtophys/vtophys.o 00:03:24.163 CC test/env/memory/memory_ut.o 00:03:24.163 CC test/app/stub/stub.o 
00:03:24.163 CC test/dma/test_dma/test_dma.o 00:03:24.163 CC app/fio/bdev/fio_plugin.o 00:03:24.163 CC test/app/bdev_svc/bdev_svc.o 00:03:24.163 LINK spdk_lspci 00:03:24.425 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:24.425 CC test/env/mem_callbacks/mem_callbacks.o 00:03:24.425 LINK rpc_client_test 00:03:24.425 LINK spdk_nvme_discover 00:03:24.425 LINK poller_perf 00:03:24.425 LINK jsoncat 00:03:24.425 LINK histogram_perf 00:03:24.425 LINK vtophys 00:03:24.425 LINK zipf 00:03:24.425 LINK interrupt_tgt 00:03:24.425 LINK spdk_trace_record 00:03:24.425 CXX test/cpp_headers/crc64.o 00:03:24.425 CXX test/cpp_headers/dif.o 00:03:24.425 CXX test/cpp_headers/dma.o 00:03:24.425 LINK nvmf_tgt 00:03:24.425 CXX test/cpp_headers/endian.o 00:03:24.425 CXX test/cpp_headers/env_dpdk.o 00:03:24.425 CXX test/cpp_headers/env.o 00:03:24.425 LINK env_dpdk_post_init 00:03:24.425 CXX test/cpp_headers/event.o 00:03:24.425 CXX test/cpp_headers/fd_group.o 00:03:24.425 CXX test/cpp_headers/fd.o 00:03:24.425 CXX test/cpp_headers/file.o 00:03:24.425 CXX test/cpp_headers/ftl.o 00:03:24.425 LINK stub 00:03:24.425 CXX test/cpp_headers/gpt_spec.o 00:03:24.425 LINK iscsi_tgt 00:03:24.690 CXX test/cpp_headers/hexlify.o 00:03:24.690 CXX test/cpp_headers/histogram_data.o 00:03:24.690 CXX test/cpp_headers/idxd.o 00:03:24.690 LINK spdk_tgt 00:03:24.690 CXX test/cpp_headers/idxd_spec.o 00:03:24.690 LINK ioat_perf 00:03:24.690 CXX test/cpp_headers/init.o 00:03:24.690 LINK bdev_svc 00:03:24.690 LINK verify 00:03:24.690 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:24.690 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:24.690 CXX test/cpp_headers/ioat.o 00:03:24.690 CXX test/cpp_headers/ioat_spec.o 00:03:24.690 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:24.690 LINK spdk_dd 00:03:24.690 CXX test/cpp_headers/iscsi_spec.o 00:03:24.952 CXX test/cpp_headers/json.o 00:03:24.952 LINK spdk_trace 00:03:24.952 CXX test/cpp_headers/jsonrpc.o 00:03:24.952 CXX test/cpp_headers/keyring.o 00:03:24.952 CXX test/cpp_headers/keyring_module.o 00:03:24.952 CXX test/cpp_headers/likely.o 00:03:24.952 CXX test/cpp_headers/log.o 00:03:24.952 CXX test/cpp_headers/lvol.o 00:03:24.952 CXX test/cpp_headers/memory.o 00:03:24.952 CXX test/cpp_headers/mmio.o 00:03:24.952 CXX test/cpp_headers/nbd.o 00:03:24.952 CXX test/cpp_headers/net.o 00:03:24.952 LINK pci_ut 00:03:24.952 CXX test/cpp_headers/notify.o 00:03:24.952 CXX test/cpp_headers/nvme.o 00:03:24.952 CXX test/cpp_headers/nvme_intel.o 00:03:24.952 CXX test/cpp_headers/nvme_ocssd.o 00:03:24.952 LINK test_dma 00:03:24.952 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:24.952 CXX test/cpp_headers/nvme_spec.o 00:03:24.952 CXX test/cpp_headers/nvme_zns.o 00:03:24.952 CXX test/cpp_headers/nvmf_cmd.o 00:03:24.952 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:24.952 CXX test/cpp_headers/nvmf.o 00:03:24.952 CXX test/cpp_headers/nvmf_spec.o 00:03:24.952 CXX test/cpp_headers/nvmf_transport.o 00:03:25.224 CXX test/cpp_headers/opal.o 00:03:25.224 CC test/event/event_perf/event_perf.o 00:03:25.224 CC test/event/reactor/reactor.o 00:03:25.224 LINK nvme_fuzz 00:03:25.224 CC examples/sock/hello_world/hello_sock.o 00:03:25.224 CC examples/vmd/lsvmd/lsvmd.o 00:03:25.224 CC test/event/reactor_perf/reactor_perf.o 00:03:25.224 CXX test/cpp_headers/opal_spec.o 00:03:25.224 CC examples/idxd/perf/perf.o 00:03:25.224 CC examples/thread/thread/thread_ex.o 00:03:25.224 CXX test/cpp_headers/pci_ids.o 00:03:25.224 CXX test/cpp_headers/pipe.o 00:03:25.224 CC examples/vmd/led/led.o 00:03:25.225 CC test/event/app_repeat/app_repeat.o 
00:03:25.225 LINK spdk_nvme 00:03:25.225 LINK spdk_bdev 00:03:25.225 CC test/event/scheduler/scheduler.o 00:03:25.225 CXX test/cpp_headers/queue.o 00:03:25.225 CXX test/cpp_headers/reduce.o 00:03:25.225 CXX test/cpp_headers/rpc.o 00:03:25.225 CXX test/cpp_headers/scheduler.o 00:03:25.225 CXX test/cpp_headers/scsi.o 00:03:25.225 CXX test/cpp_headers/scsi_spec.o 00:03:25.225 CXX test/cpp_headers/sock.o 00:03:25.225 CXX test/cpp_headers/stdinc.o 00:03:25.225 CXX test/cpp_headers/string.o 00:03:25.485 CXX test/cpp_headers/thread.o 00:03:25.485 CXX test/cpp_headers/trace.o 00:03:25.485 CXX test/cpp_headers/trace_parser.o 00:03:25.485 CXX test/cpp_headers/tree.o 00:03:25.485 CXX test/cpp_headers/ublk.o 00:03:25.485 CXX test/cpp_headers/util.o 00:03:25.485 CXX test/cpp_headers/uuid.o 00:03:25.485 LINK event_perf 00:03:25.485 CXX test/cpp_headers/version.o 00:03:25.485 LINK lsvmd 00:03:25.485 CXX test/cpp_headers/vfio_user_pci.o 00:03:25.485 CXX test/cpp_headers/vfio_user_spec.o 00:03:25.485 LINK reactor 00:03:25.485 CXX test/cpp_headers/vhost.o 00:03:25.485 CXX test/cpp_headers/vmd.o 00:03:25.485 CXX test/cpp_headers/xor.o 00:03:25.485 LINK reactor_perf 00:03:25.485 CXX test/cpp_headers/zipf.o 00:03:25.485 CC app/vhost/vhost.o 00:03:25.485 LINK spdk_nvme_perf 00:03:25.485 LINK spdk_nvme_identify 00:03:25.485 LINK mem_callbacks 00:03:25.485 LINK led 00:03:25.485 LINK app_repeat 00:03:25.485 LINK vhost_fuzz 00:03:25.766 LINK hello_sock 00:03:25.766 LINK spdk_top 00:03:25.766 CC test/nvme/overhead/overhead.o 00:03:25.766 CC test/nvme/aer/aer.o 00:03:25.766 CC test/nvme/sgl/sgl.o 00:03:25.766 LINK thread 00:03:25.766 LINK scheduler 00:03:25.766 CC test/nvme/reset/reset.o 00:03:25.766 CC test/nvme/startup/startup.o 00:03:25.766 CC test/nvme/e2edp/nvme_dp.o 00:03:25.766 CC test/nvme/err_injection/err_injection.o 00:03:25.766 CC test/blobfs/mkfs/mkfs.o 00:03:25.766 CC test/accel/dif/dif.o 00:03:25.766 CC test/nvme/reserve/reserve.o 00:03:25.766 CC test/nvme/simple_copy/simple_copy.o 00:03:25.766 CC test/nvme/connect_stress/connect_stress.o 00:03:25.766 CC test/nvme/boot_partition/boot_partition.o 00:03:25.766 CC test/lvol/esnap/esnap.o 00:03:25.766 CC test/nvme/compliance/nvme_compliance.o 00:03:25.766 CC test/nvme/fused_ordering/fused_ordering.o 00:03:25.766 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:25.766 LINK idxd_perf 00:03:25.766 LINK vhost 00:03:25.766 CC test/nvme/fdp/fdp.o 00:03:25.766 CC test/nvme/cuse/cuse.o 00:03:26.023 LINK startup 00:03:26.023 LINK reserve 00:03:26.023 LINK boot_partition 00:03:26.023 LINK mkfs 00:03:26.023 LINK err_injection 00:03:26.023 LINK doorbell_aers 00:03:26.023 LINK sgl 00:03:26.023 LINK nvme_dp 00:03:26.023 LINK overhead 00:03:26.023 LINK fused_ordering 00:03:26.023 LINK connect_stress 00:03:26.023 LINK aer 00:03:26.279 LINK simple_copy 00:03:26.279 CC examples/nvme/arbitration/arbitration.o 00:03:26.279 CC examples/nvme/abort/abort.o 00:03:26.279 CC examples/nvme/hotplug/hotplug.o 00:03:26.279 CC examples/nvme/hello_world/hello_world.o 00:03:26.280 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:26.280 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:26.280 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:26.280 CC examples/nvme/reconnect/reconnect.o 00:03:26.280 LINK memory_ut 00:03:26.280 LINK reset 00:03:26.280 CC examples/accel/perf/accel_perf.o 00:03:26.280 LINK fdp 00:03:26.280 CC examples/blob/cli/blobcli.o 00:03:26.280 CC examples/blob/hello_world/hello_blob.o 00:03:26.280 LINK nvme_compliance 00:03:26.280 LINK dif 00:03:26.537 LINK 
pmr_persistence 00:03:26.537 LINK cmb_copy 00:03:26.537 LINK hello_world 00:03:26.537 LINK hotplug 00:03:26.537 LINK reconnect 00:03:26.537 LINK arbitration 00:03:26.537 LINK abort 00:03:26.537 LINK hello_blob 00:03:26.796 LINK nvme_manage 00:03:26.796 LINK accel_perf 00:03:26.796 CC test/bdev/bdevio/bdevio.o 00:03:26.796 LINK blobcli 00:03:27.054 LINK iscsi_fuzz 00:03:27.054 CC examples/bdev/hello_world/hello_bdev.o 00:03:27.054 CC examples/bdev/bdevperf/bdevperf.o 00:03:27.312 LINK bdevio 00:03:27.312 LINK hello_bdev 00:03:27.312 LINK cuse 00:03:27.876 LINK bdevperf 00:03:28.445 CC examples/nvmf/nvmf/nvmf.o 00:03:28.445 LINK nvmf 00:03:30.978 LINK esnap 00:03:31.237 00:03:31.237 real 0m48.990s 00:03:31.237 user 10m10.973s 00:03:31.237 sys 2m29.346s 00:03:31.237 15:40:28 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:31.237 15:40:28 make -- common/autotest_common.sh@10 -- $ set +x 00:03:31.237 ************************************ 00:03:31.237 END TEST make 00:03:31.237 ************************************ 00:03:31.237 15:40:28 -- common/autotest_common.sh@1142 -- $ return 0 00:03:31.237 15:40:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:31.237 15:40:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:31.237 15:40:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:31.237 15:40:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.237 15:40:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:31.237 15:40:28 -- pm/common@44 -- $ pid=547997 00:03:31.237 15:40:28 -- pm/common@50 -- $ kill -TERM 547997 00:03:31.237 15:40:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.237 15:40:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:31.237 15:40:28 -- pm/common@44 -- $ pid=547999 00:03:31.237 15:40:28 -- pm/common@50 -- $ kill -TERM 547999 00:03:31.237 15:40:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.237 15:40:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:31.237 15:40:28 -- pm/common@44 -- $ pid=548001 00:03:31.237 15:40:28 -- pm/common@50 -- $ kill -TERM 548001 00:03:31.237 15:40:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.237 15:40:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:31.237 15:40:28 -- pm/common@44 -- $ pid=548033 00:03:31.237 15:40:28 -- pm/common@50 -- $ sudo -E kill -TERM 548033 00:03:31.237 15:40:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:31.237 15:40:28 -- nvmf/common.sh@7 -- # uname -s 00:03:31.237 15:40:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:31.237 15:40:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:31.237 15:40:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:31.237 15:40:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:31.237 15:40:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:31.237 15:40:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:31.237 15:40:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:31.237 15:40:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:31.237 15:40:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:31.237 15:40:28 -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:31.237 15:40:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:03:31.237 15:40:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:03:31.237 15:40:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:31.237 15:40:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:31.237 15:40:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:31.237 15:40:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:31.237 15:40:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:31.237 15:40:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:31.237 15:40:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:31.237 15:40:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:31.237 15:40:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.237 15:40:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.237 15:40:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.237 15:40:28 -- paths/export.sh@5 -- # export PATH 00:03:31.238 15:40:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.238 15:40:28 -- nvmf/common.sh@47 -- # : 0 00:03:31.238 15:40:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:31.238 15:40:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:31.238 15:40:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:31.238 15:40:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:31.238 15:40:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:31.238 15:40:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:31.238 15:40:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:31.238 15:40:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:31.238 15:40:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:31.238 15:40:28 -- spdk/autotest.sh@32 -- # uname -s 00:03:31.238 15:40:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:31.238 15:40:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:31.238 15:40:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:31.238 15:40:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:31.238 15:40:28 -- 
spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:31.238 15:40:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:31.238 15:40:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:31.238 15:40:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:31.238 15:40:28 -- spdk/autotest.sh@48 -- # udevadm_pid=603505 00:03:31.238 15:40:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:31.238 15:40:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:31.238 15:40:28 -- pm/common@17 -- # local monitor 00:03:31.238 15:40:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.238 15:40:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.238 15:40:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.238 15:40:28 -- pm/common@21 -- # date +%s 00:03:31.238 15:40:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.238 15:40:28 -- pm/common@21 -- # date +%s 00:03:31.238 15:40:28 -- pm/common@25 -- # sleep 1 00:03:31.238 15:40:28 -- pm/common@21 -- # date +%s 00:03:31.238 15:40:28 -- pm/common@21 -- # date +%s 00:03:31.238 15:40:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720791628 00:03:31.238 15:40:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720791628 00:03:31.238 15:40:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720791628 00:03:31.238 15:40:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720791628 00:03:31.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720791628_collect-vmstat.pm.log 00:03:31.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720791628_collect-cpu-load.pm.log 00:03:31.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720791628_collect-cpu-temp.pm.log 00:03:31.496 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720791628_collect-bmc-pm.bmc.pm.log 00:03:32.472 15:40:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:32.472 15:40:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:32.472 15:40:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:32.472 15:40:29 -- common/autotest_common.sh@10 -- # set +x 00:03:32.472 15:40:29 -- spdk/autotest.sh@59 -- # create_test_list 00:03:32.472 15:40:29 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:32.472 15:40:29 -- common/autotest_common.sh@10 -- # set +x 00:03:32.472 15:40:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:32.472 15:40:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:32.472 15:40:29 -- spdk/autotest.sh@61 -- # 
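The collect-cpu-load/vmstat/cpu-temp/bmc-pm monitors started here use the same pid-file convention that the TERM handler at the end of the make stage relied on: each collector runs in the background, its PID lands in a .pid file next to its .pm.log, and teardown later sends kill -TERM to whatever that file names. A rough sketch of the pattern, with vmstat standing in for the real collectors (the real ones live under scripts/perf/pm/ and take the -d/-l/-p options shown above):

  outdir=./power
  mkdir -p "$outdir"
  # Start a stand-in monitor in the background and record its log and PID.
  vmstat 1 > "$outdir/collect-vmstat.pm.log" 2>&1 &
  echo $! > "$outdir/collect-vmstat.pid"
  # ... the test workload runs here ...
  # Teardown: signal whichever process the pid file names, if it exists.
  [[ -e "$outdir/collect-vmstat.pid" ]] && kill -TERM "$(cat "$outdir/collect-vmstat.pid")"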
src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:32.472 15:40:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:32.472 15:40:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:32.472 15:40:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:32.472 15:40:29 -- common/autotest_common.sh@1455 -- # uname 00:03:32.472 15:40:29 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:32.472 15:40:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:32.472 15:40:29 -- common/autotest_common.sh@1475 -- # uname 00:03:32.472 15:40:29 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:32.472 15:40:29 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:32.472 15:40:29 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:32.472 15:40:29 -- spdk/autotest.sh@72 -- # hash lcov 00:03:32.472 15:40:29 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:32.472 15:40:29 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:32.472 --rc lcov_branch_coverage=1 00:03:32.472 --rc lcov_function_coverage=1 00:03:32.472 --rc genhtml_branch_coverage=1 00:03:32.472 --rc genhtml_function_coverage=1 00:03:32.472 --rc genhtml_legend=1 00:03:32.472 --rc geninfo_all_blocks=1 00:03:32.472 ' 00:03:32.472 15:40:29 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:32.473 --rc lcov_branch_coverage=1 00:03:32.473 --rc lcov_function_coverage=1 00:03:32.473 --rc genhtml_branch_coverage=1 00:03:32.473 --rc genhtml_function_coverage=1 00:03:32.473 --rc genhtml_legend=1 00:03:32.473 --rc geninfo_all_blocks=1 00:03:32.473 ' 00:03:32.473 15:40:29 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:32.473 --rc lcov_branch_coverage=1 00:03:32.473 --rc lcov_function_coverage=1 00:03:32.473 --rc genhtml_branch_coverage=1 00:03:32.473 --rc genhtml_function_coverage=1 00:03:32.473 --rc genhtml_legend=1 00:03:32.473 --rc geninfo_all_blocks=1 00:03:32.473 --no-external' 00:03:32.473 15:40:29 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:32.473 --rc lcov_branch_coverage=1 00:03:32.473 --rc lcov_function_coverage=1 00:03:32.473 --rc genhtml_branch_coverage=1 00:03:32.473 --rc genhtml_function_coverage=1 00:03:32.473 --rc genhtml_legend=1 00:03:32.473 --rc geninfo_all_blocks=1 00:03:32.473 --no-external' 00:03:32.473 15:40:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:32.473 lcov: LCOV version 1.14 00:03:32.473 15:40:29 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:47.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:47.371 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:02.263 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:02.263 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no 
functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:02.263 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:02.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:02.263 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:05.546 15:41:02 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:05.546 15:41:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:05.546 15:41:02 -- common/autotest_common.sh@10 -- # set +x 00:04:05.546 15:41:02 -- spdk/autotest.sh@91 -- # rm -f 00:04:05.546 15:41:02 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:06.966 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:04:06.966 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:06.966 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:06.966 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:06.966 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:06.966 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:06.966 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:06.966 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:06.966 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:06.966 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:06.966 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:06.966 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:06.966 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:06.966 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:06.966 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:06.966 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:06.966 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:06.966 15:41:04 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:06.966 15:41:04 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:06.966 15:41:04 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:06.966 15:41:04 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:06.966 15:41:04 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:06.966 15:41:04 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:06.966 15:41:04 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:06.966 15:41:04 -- 
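The warnings above close out the coverage baseline: the lcov capture was run with -i before any test executed, and the .gcno files it flags (the test/cpp_headers compile checks and the nvme stubs) contain no executable functions for GCOV to track, so geninfo has nothing to record for them. The usual flow is baseline capture, test run, post-run capture, then a merge that genhtml can render; a minimal sketch with the same LCOV_OPTS and illustrative paths and file names:

  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
  # Zero-coverage baseline taken before any test runs (the -i capture above).
  lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d ./spdk -o cov_base.info
  # ... run the test suites so .gcda counter files are produced ...
  # Capture the real counters and fold them into the baseline.
  lcov $LCOV_OPTS --no-external -q -c -t Tests -d ./spdk -o cov_test.info
  lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info
  genhtml cov_total.info -o coverage_html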
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:06.966 15:41:04 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:06.966 15:41:04 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:04:06.966 15:41:04 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:04:06.966 15:41:04 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:04:06.966 15:41:04 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:04:06.966 15:41:04 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:04:06.966 15:41:04 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:06.966 No valid GPT data, bailing
00:04:06.966 15:41:04 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:06.966 15:41:04 -- scripts/common.sh@391 -- # pt=
00:04:06.966 15:41:04 -- scripts/common.sh@392 -- # return 1
00:04:06.966 15:41:04 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:06.966 1+0 records in
00:04:06.966 1+0 records out
00:04:06.966 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00237125 s, 442 MB/s
00:04:06.966 15:41:04 -- spdk/autotest.sh@118 -- # sync
00:04:06.966 15:41:04 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:06.966 15:41:04 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:06.966 15:41:04 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:04:09.503 15:41:06 -- spdk/autotest.sh@124 -- # uname -s
00:04:09.503 15:41:06 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:04:09.503 15:41:06 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:04:09.503 15:41:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:09.503 15:41:06 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:09.503 15:41:06 -- common/autotest_common.sh@10 -- # set +x
00:04:09.503 ************************************
00:04:09.503 START TEST setup.sh
00:04:09.503 ************************************
00:04:09.503 15:41:06 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:04:09.503 * Looking for test storage...
00:04:09.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:09.503 15:41:06 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:04:09.503 15:41:06 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:04:09.503 15:41:06 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:04:09.503 15:41:06 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:09.503 15:41:06 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:09.503 15:41:06 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:09.503 ************************************
00:04:09.503 START TEST acl
00:04:09.503 ************************************
00:04:09.503 15:41:06 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:04:09.503 * Looking for test storage...
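The block_in_use probe above is a safety gate for the wipe that follows: the SPDK GPT helper and a blkid PTTYPE query both come back empty, so the drive is treated as free and its first MiB is zeroed. The same guard can be written directly, assuming /dev/nvme0n1 really is a scratch device:

  dev=/dev/nvme0n1
  # blkid prints the partition-table type (gpt, dos, ...) or nothing at all;
  # it exits non-zero when no signature is found, hence the || true.
  pt=$(blkid -s PTTYPE -o value "$dev" || true)
  if [[ -z $pt ]]; then
      dd if=/dev/zero of="$dev" bs=1M count=1   # scrub stale metadata in the first MiB
      sync
  else
      echo "refusing to wipe $dev: found a $pt partition table" >&2
  fi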
00:04:09.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:09.503 15:41:06 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:09.503 15:41:06 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:09.503 15:41:06 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:09.503 15:41:06 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:09.503 15:41:06 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:09.503 15:41:06 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:09.503 15:41:06 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:09.503 15:41:06 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:09.503 15:41:06 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:09.503 15:41:06 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:09.503 15:41:06 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:09.503 15:41:06 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:09.503 15:41:06 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:09.503 15:41:06 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:09.503 15:41:06 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.503 15:41:06 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.879 15:41:07 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:10.879 15:41:07 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:10.879 15:41:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:10.879 15:41:07 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:10.879 15:41:07 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.879 15:41:07 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:11.817 Hugepages 00:04:11.817 node hugesize free / total 00:04:11.817 15:41:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:11.817 15:41:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:11.817 15:41:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:11.817 15:41:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:11.817 15:41:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:08 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:11.817 15:41:08 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:11.817 15:41:08 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 00:04:11.817 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:11.817 15:41:09 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.077 15:41:09 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:12.077 15:41:09 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:12.077 15:41:09 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.077 15:41:09 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.077 15:41:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:12.077 ************************************ 00:04:12.077 START TEST denied 00:04:12.077 ************************************ 00:04:12.077 15:41:09 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:12.077 15:41:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:04:12.077 15:41:09 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:12.077 15:41:09 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:04:12.077 15:41:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.077 15:41:09 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:13.453 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:04:13.453 15:41:10 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:04:13.453 15:41:10 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:13.453 15:41:10 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:13.453 15:41:10 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:04:13.453 15:41:10 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:04:13.453 15:41:10 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:13.453 15:41:10 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:13.453 15:41:10 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:04:13.453 15:41:10 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:13.453 15:41:10 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:15.990
00:04:15.990 real 0m3.970s
00:04:15.990 user 0m1.135s
00:04:15.990 sys 0m1.884s
00:04:15.990 15:41:13 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:15.990 15:41:13 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:04:15.990 ************************************
00:04:15.990 END TEST denied
00:04:15.990 ************************************
00:04:15.990 15:41:13 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:04:15.990 15:41:13 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:15.990 15:41:13 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:15.990 15:41:13 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:15.990 15:41:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:15.990 ************************************
00:04:15.990 START TEST allowed
00:04:15.990 ************************************
00:04:15.990 15:41:13 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed
00:04:15.990 15:41:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0
00:04:15.990 15:41:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:04:15.990 15:41:13 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*'
00:04:15.990 15:41:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:04:15.990 15:41:13 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:18.528 0000:82:00.0 (8086 0a54): nvme -> vfio-pci
00:04:18.528 15:41:15 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:04:18.528 15:41:15 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:04:18.528 15:41:15 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:04:18.528 15:41:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:18.528 15:41:15 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:19.905
00:04:19.905 real 0m3.978s
00:04:19.905 user 0m1.050s
00:04:19.905 sys 0m1.793s
00:04:19.905 15:41:17 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:19.905 15:41:17 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:04:19.905 ************************************
00:04:19.905 END TEST allowed
00:04:19.905 ************************************
00:04:19.905 15:41:17 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:04:19.905
00:04:19.905 real 0m10.867s
00:04:19.905 user 0m3.357s
00:04:19.905 sys 0m5.513s
00:04:19.905 15:41:17 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:19.905 15:41:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:19.905 ************************************
00:04:19.905 END TEST acl
00:04:19.905 ************************************
00:04:19.905 15:41:17 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:04:19.905 15:41:17 setup.sh --
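The acl test that just finished exercises setup.sh purely through driver bindings: a BDF listed in PCI_BLOCKED must be skipped ("Skipping denied controller"), while one listed in PCI_ALLOWED is handed from the kernel nvme driver to vfio-pci. The check itself comes down to reading the device's driver symlink, roughly like this (a simplified stand-in for the verify loop, not the actual setup/acl.sh code):

  bdf=0000:82:00.0
  PCI_BLOCKED=" 0000:82:00.0"   # mirrors the denied-test setting above
  # Resolve which driver the device is currently bound to (assumes one is bound).
  drv=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
  if [[ " $PCI_BLOCKED " == *" $bdf "* ]]; then
      echo "Skipping denied controller at $bdf"
  else
      echo "$bdf is bound to $drv"   # nvme before setup.sh config, vfio-pci afterwards
  fi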
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:19.905 15:41:17 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.905 15:41:17 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.905 15:41:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:20.165 ************************************ 00:04:20.165 START TEST hugepages 00:04:20.165 ************************************ 00:04:20.165 15:41:17 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:20.165 * Looking for test storage... 00:04:20.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 28177488 kB' 'MemAvailable: 31742764 kB' 'Buffers: 2704 kB' 'Cached: 9291988 kB' 'SwapCached: 0 kB' 'Active: 6286820 kB' 'Inactive: 3505240 kB' 'Active(anon): 5897280 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 500520 kB' 'Mapped: 155012 kB' 'Shmem: 5399912 kB' 'KReclaimable: 164636 kB' 'Slab: 477472 kB' 'SReclaimable: 164636 kB' 'SUnreclaim: 312836 kB' 'KernelStack: 12352 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304788 kB' 'Committed_AS: 7016084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195504 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.165 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:20.166 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.167 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:20.167 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:20.167 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.167 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:20.167 
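The long run of [[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue pairs above is a single call to setup/common.sh's get_meminfo: it walks /proc/meminfo one key at a time, every non-matching key takes the continue branch, and the first field equal to Hugepagesize has its value echoed back (2048) before the function returns; hugepages.sh then records that as default_hugepages along with the sysfs and procfs nr_hugepages paths, and clear_hp starts zeroing each node's counts (continuing below). The real helper first snapshots the file into a mem array (the printf '%s\n' 'MemTotal: ...' dumps seen in this trace) and can read a per-node meminfo when node= is set; a minimal sketch that skips those details:

    # Sketch of the scan traced above, not the verbatim setup/common.sh source.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each skipped key is one "continue" in the trace
            echo "$val"                        # the trailing "kB" falls into the discarded third field
            return 0
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo Hugepagesize)   # -> 2048 on this machine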
15:41:17 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.167 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:20.167 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:20.167 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:20.167 15:41:17 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:20.167 15:41:17 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.167 15:41:17 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.167 15:41:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:20.167 ************************************ 00:04:20.167 START TEST default_setup 00:04:20.167 ************************************ 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.167 15:41:17 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:21.546 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:21.546 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:21.546 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:21.546 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:21.546 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:21.546 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:21.546 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:21.546 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:21.546 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:21.546 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:21.546 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:21.546 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:21.546 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:21.546 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:21.546 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:21.546 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:22.483 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30247036 kB' 'MemAvailable: 33812312 kB' 'Buffers: 2704 kB' 'Cached: 9292084 kB' 'SwapCached: 0 kB' 'Active: 6306716 kB' 'Inactive: 3505240 kB' 'Active(anon): 5917176 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520360 kB' 'Mapped: 155080 kB' 'Shmem: 5400008 kB' 'KReclaimable: 164640 kB' 'Slab: 477092 kB' 'SReclaimable: 164640 kB' 'SUnreclaim: 312452 kB' 
'KernelStack: 12640 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7036728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 
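The meminfo snapshot printed just above is the first reading taken by verify_nr_hugepages, and its hugepage counters line up with what the test requested earlier in the trace: get_test_nr_hugepages turned the 2097152 kB request into 1024 default-sized pages pinned to node 0, scripts/setup.sh rebound the IOAT and NVMe devices to vfio-pci, and the snapshot implies the pages were then allocated, with none yet in use. The arithmetic, using only values visible in the trace:

    size_kb=2097152            # argument to get_test_nr_hugepages
    default_hugepages=2048     # Hugepagesize from /proc/meminfo, in kB
    nr_hugepages=$(( size_kb / default_hugepages ))   # -> 1024
    # Snapshot check: HugePages_Total: 1024, HugePages_Free: 1024,
    # Hugetlb: 2097152 kB == 1024 * 2048 kB.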
15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.748 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.749 15:41:19 
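The AnonHugePages scan above ended with echo 0, so anon=0, and the identical loop now starts over for HugePages_Surp (continuing below). The anon reading is only taken because transparent hugepages are not globally disabled on this host: the earlier check compared the THP setting, shown as 'always [madvise] never' (madvise mode), against the literal '[never]'. A sketch of that gate; the sysfs path is the conventional THP control file and is assumed here rather than shown in the trace:

    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" on this host
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run anyway
    fi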
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30259428 kB' 'MemAvailable: 33824704 kB' 'Buffers: 2704 kB' 'Cached: 9292084 kB' 'SwapCached: 0 kB' 'Active: 6305748 kB' 'Inactive: 3505240 kB' 'Active(anon): 5916208 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519440 kB' 'Mapped: 155032 kB' 'Shmem: 5400008 kB' 'KReclaimable: 164640 kB' 'Slab: 477092 kB' 'SReclaimable: 164640 kB' 'SUnreclaim: 312452 kB' 'KernelStack: 12160 kB' 'PageTables: 7348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7042108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195504 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.749 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.750 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30258016 kB' 'MemAvailable: 33823292 kB' 'Buffers: 2704 kB' 'Cached: 9292104 kB' 'SwapCached: 0 kB' 'Active: 6305336 kB' 'Inactive: 3505240 kB' 'Active(anon): 5915796 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519036 kB' 'Mapped: 155032 kB' 'Shmem: 5400028 kB' 'KReclaimable: 164640 kB' 'Slab: 477072 kB' 'SReclaimable: 164640 kB' 'SUnreclaim: 312432 kB' 'KernelStack: 12240 kB' 'PageTables: 7332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7036400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195472 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.751 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
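What the long run of comparisons around this point is doing: setup/common.sh's get_meminfo reads /proc/meminfo (or a per-node meminfo file) into an array, strips any "Node N " prefix, then walks the "key: value" fields with IFS=': ' read -r var val _ until the requested key (here HugePages_Rsvd) matches, echoes its value, and returns. A minimal sketch of that pattern, reconstructed from the xtrace output itself rather than copied from setup/common.sh, so the real implementation may differ in detail:

    shopt -s extglob  # needed for the +([0-9]) pattern used below
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem
        # A node argument switches the source to that node's meminfo file
        # (e.g. /sys/devices/system/node/node0/meminfo later in this trace).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node N "; strip it so both
        # files parse identically.
        mem=("${mem[@]#Node +([0-9]) }")
        # Walk "key: value unit" fields until the requested key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"  # e.g. HugePages_Rsvd -> 0 in this run
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }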
00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 
15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.752 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:22.753 nr_hugepages=1024 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:22.753 resv_hugepages=0 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:22.753 surplus_hugepages=0 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:22.753 anon_hugepages=0 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30258612 
kB' 'MemAvailable: 33823888 kB' 'Buffers: 2704 kB' 'Cached: 9292128 kB' 'SwapCached: 0 kB' 'Active: 6305020 kB' 'Inactive: 3505240 kB' 'Active(anon): 5915480 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518736 kB' 'Mapped: 155032 kB' 'Shmem: 5400052 kB' 'KReclaimable: 164640 kB' 'Slab: 477164 kB' 'SReclaimable: 164640 kB' 'SUnreclaim: 312524 kB' 'KernelStack: 12256 kB' 'PageTables: 7404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7036424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195472 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.753 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
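For context on where this second full scan fits: the default_setup test has already recorded surp=0 (HugePages_Surp) and resv=0 (HugePages_Rsvd) above, echoes nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0, and this pass looks up HugePages_Total so it can be checked against nr_hugepages + surp + resv before the per-node lookups from /sys/devices/system/node/nodeN/meminfo seen a little further down. A rough sketch of that bookkeeping, assuming the hypothetical get_meminfo sketch above and reconstructed from the trace (the real setup/hugepages.sh may differ):

    check_default_setup() {
        local nr_hugepages=${1:-1024}  # the value configured by the test
        local surp resv total node
        surp=$(get_meminfo HugePages_Surp)  # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)  # 0 in this run
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        # System-wide total must equal what was requested plus surplus/reserved.
        total=$(get_meminfo HugePages_Total)
        ((total == nr_hugepages + surp + resv)) || return 1
        # Per-NUMA-node counters come from /sys/devices/system/node/nodeN/meminfo
        # via the optional node argument (node 0 holds all 1024 pages here).
        for node in /sys/devices/system/node/node[0-9]*; do
            node=${node##*node}
            echo "node$node HugePages_Surp: $(get_meminfo HugePages_Surp "$node")"
        done
    }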
00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.754 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.755 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20259164 kB' 'MemUsed: 4313192 kB' 'SwapCached: 0 kB' 'Active: 1576680 kB' 'Inactive: 72212 kB' 'Active(anon): 1447412 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1336436 kB' 'Mapped: 75680 kB' 'AnonPages: 315700 kB' 'Shmem: 1134956 kB' 'KernelStack: 6808 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 48868 kB' 'Slab: 192636 kB' 'SReclaimable: 48868 kB' 'SUnreclaim: 143768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' [xtrace condensed: setup/common.sh's get_meminfo walks the remaining meminfo fields one at a time with IFS=': ' / read -r var val _, hitting "continue" on every key that is not HugePages_Surp] 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:22.756 node0=1024 expecting 1024 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:22.756 00:04:22.756 real 0m2.577s 00:04:22.756 user 0m0.690s 00:04:22.756 sys 0m0.985s 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.756 15:41:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:22.756 ************************************ 00:04:22.756 END TEST default_setup 00:04:22.756 ************************************ 00:04:22.756 15:41:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:22.756 15:41:19 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:22.756 15:41:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.756 15:41:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.756 15:41:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:22.756 ************************************ 00:04:22.756 START TEST per_node_1G_alloc 00:04:22.756 ************************************ 00:04:22.756 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:22.756 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:22.756 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:22.756 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:22.756 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:22.756 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:22.756 15:41:19 
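The scan condensed above is just a field-by-field read of a meminfo file. Below is a minimal sketch of that pattern in bash, assuming a hypothetical helper name (get_meminfo_value) and the standard /proc and per-node sysfs paths rather than SPDK's own setup/common.sh:

  #!/usr/bin/env bash
  # Sketch only: mirrors the per-key scan seen in the trace above.
  # get_meminfo_value is a hypothetical name, not SPDK's get_meminfo.
  get_meminfo_value() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # For a per-node query, read that node's own meminfo and drop the "Node N" prefix.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # every other key falls through, as in the xtrace
          echo "$val"
          return 0
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }
  # e.g. get_meminfo_value HugePages_Free      (system-wide)
  #      get_meminfo_value HugePages_Total 0   (NUMA node 0)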
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.757 15:41:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:24.205 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:24.205 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:24.205 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:24.205 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:24.205 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:24.205 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:24.205 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:24.205 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:24.205 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:24.205 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:24.205 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:24.205 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:24.205 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:24.205 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:24.205 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:24.205 
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:24.205 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.205 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30230348 kB' 'MemAvailable: 33795636 kB' 'Buffers: 2704 kB' 'Cached: 9292208 kB' 'SwapCached: 0 kB' 'Active: 6310880 kB' 'Inactive: 3505240 kB' 'Active(anon): 5921340 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524504 kB' 'Mapped: 155956 kB' 'Shmem: 5400132 kB' 'KReclaimable: 164664 kB' 'Slab: 477392 kB' 'SReclaimable: 164664 kB' 'SUnreclaim: 312728 kB' 'KernelStack: 12240 kB' 'PageTables: 7404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7043248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195476 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:24.205 [xtrace condensed: get_meminfo walks every field of the snapshot above, hitting "continue" until AnonHugePages matches, then setup/common.sh@33 echoes its value and returns] 00:04:24.206 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.206 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
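For context on the numbers being verified here: the NRHUGE=512 / HUGENODE=0,1 invocation earlier in this trace asks setup.sh for 512 hugepages on each of two NUMA nodes. A hedged sketch of that kind of per-node reservation using the kernel's standard sysfs knobs (not SPDK's setup.sh itself; the defaults below are illustrative):

  #!/usr/bin/env bash
  # Sketch: reserve 2 MiB hugepages per NUMA node via sysfs.
  # Illustrative only; NRHUGE/HUGENODE defaults mimic the trace, and the page size
  # matches the 'Hugepagesize: 2048 kB' reported above.
  NRHUGE=${NRHUGE:-512}
  HUGENODE=${HUGENODE:-0,1}
  IFS=',' read -ra nodes <<< "$HUGENODE"
  for node in "${nodes[@]}"; do
      knob=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
      if [[ -w $knob ]]; then
          echo "$NRHUGE" > "$knob"
          echo "node$node: requested $NRHUGE, kernel granted $(cat "$knob")"
      else
          echo "node$node: cannot write $knob (needs root on a NUMA kernel)" >&2
      fi
  done

The kernel may grant fewer pages than requested when contiguous memory is scarce, which is what the verify pass that follows in the trace is guarding against.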
get_meminfo HugePages_Surp 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30230900 kB' 'MemAvailable: 33796188 kB' 'Buffers: 2704 kB' 'Cached: 9292212 kB' 'SwapCached: 0 kB' 'Active: 6311492 kB' 'Inactive: 3505240 kB' 'Active(anon): 5921952 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525076 kB' 'Mapped: 156052 kB' 'Shmem: 5400136 kB' 'KReclaimable: 164664 kB' 'Slab: 477380 kB' 'SReclaimable: 164664 kB' 'SUnreclaim: 312716 kB' 'KernelStack: 12288 kB' 'PageTables: 7528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7043268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195472 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:24.207 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [xtrace condensed: the same per-key scan repeats over the snapshot until HugePages_Surp matches, then setup/common.sh@33 echoes 0 and returns] 00:04:24.208 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:04:24.208 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.208 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.208 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:24.208 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.208 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.208 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.208 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.208 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.208 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.208 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.208 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.208 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.209 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30237164 kB' 'MemAvailable: 33802448 kB' 'Buffers: 2704 kB' 'Cached: 9292228 kB' 'SwapCached: 0 kB' 'Active: 6305876 kB' 'Inactive: 3505240 kB' 'Active(anon): 5916336 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519424 kB' 'Mapped: 155044 kB' 'Shmem: 5400152 kB' 'KReclaimable: 164656 kB' 'Slab: 477380 kB' 'SReclaimable: 164656 kB' 'SUnreclaim: 312724 kB' 'KernelStack: 12288 kB' 'PageTables: 7456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7037172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195488 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:24.209 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.209 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.209 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.209 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.209 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.209 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.209 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.209 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.209 
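The entries above are setup/common.sh's get_meminfo() preparing to scan a 'Key: value' memory snapshot: it picks /proc/meminfo (or a per-node sysfs file when a node argument is given), strips any 'Node <N> ' prefix, then reads line by line until the requested key matches and echoes its value. A minimal bash sketch of that lookup, reconstructed from the xtrace rather than copied from SPDK, so details may differ from the real helper:

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above; names mirror the xtrace.
shopt -s extglob

get_meminfo() {
    local get=$1        # key to look up, e.g. HugePages_Rsvd
    local node=${2:-}   # optional NUMA node number
    local var val
    local mem_f mem

    # Default to the system-wide view; switch to the per-node file when the
    # requested node's sysfs entry exists.
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Per-node files prefix every line with "Node <N> "; strip that so both
    # sources parse the same way.
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan "Key: value [kB]" lines until the requested key is found.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")

    return 1
}

get_meminfo HugePages_Total      # system-wide count
get_meminfo HugePages_Surp 0     # NUMA node 0

Every key that does not match shows up in the trace as one [[ ... ]] test followed by `continue`, which is what produces the long runs condensed below.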
00:04:24.209 [xtrace condensed: get_meminfo scanned /proc/meminfo from MemTotal through HugePages_Free; no key matched HugePages_Rsvd, so every iteration hit `continue`]
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:24.210 nr_hugepages=1024
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:24.210 resv_hugepages=0
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:24.210 surplus_hugepages=0
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:24.210 anon_hugepages=0
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.210 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.211 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.211 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30238028 kB' 'MemAvailable: 33803312 kB' 'Buffers: 2704 kB' 'Cached: 9292252 kB' 'SwapCached: 0 kB' 'Active: 6305756 kB' 'Inactive: 3505240 kB' 'Active(anon): 5916216 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519276 kB' 'Mapped: 155044 kB' 'Shmem: 5400176 kB' 'KReclaimable: 164656 kB' 'Slab: 477384 kB' 'SReclaimable: 164656 kB' 'SUnreclaim: 312728 kB' 'KernelStack: 12304 kB' 'PageTables: 7452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7037192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195504 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB'
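The snapshot above feeds the consistency check traced at setup/hugepages.sh@107 and @110: the expected pool of 1024 pages has to line up with what the kernel reports once surplus and reserved pages are folded in. A small sketch of that arithmetic, using awk in place of the script's own helper (an assumption made for brevity, not SPDK's exact variable flow):

#!/usr/bin/env bash
# Expected pool size for this run (1024 x 2048 kB pages, per the log above).
expected=1024

# Read the hugepage counters back from the kernel.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

# The pool must be fully accounted for: expected pages == reported total,
# with surplus and reserved pages folded into the comparison.
if (( expected == total + surp + resv )); then
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp"
else
    echo "hugepage accounting mismatch" >&2
    exit 1
fi

In this run all three reads return 1024, 0 and 0, so both arithmetic tests in the trace succeed and the test moves on to the per-node checks.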
00:04:24.211 [xtrace condensed: get_meminfo scanned /proc/meminfo from MemTotal through Unaccepted; no key matched HugePages_Total, so every iteration hit `continue`]
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:24.212 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21290152 kB' 'MemUsed: 3282204 kB' 'SwapCached: 0 kB' 'Active: 1576640 kB' 'Inactive: 72212 kB' 'Active(anon): 1447372 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1336504 kB' 'Mapped: 75692 kB' 'AnonPages: 315488 kB' 'Shmem: 1135024 kB' 'KernelStack: 6808 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 48884 kB' 'Slab: 192776 kB' 'SReclaimable: 48884 kB' 'SUnreclaim: 143892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
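For the per-node pass, get_meminfo switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a 'Node 0 ' prefix that the earlier strip removes; on this 2-node box each node reports 512 of the 1024 pages. A sketch of reading those per-node counters directly, again reconstructed from the trace rather than taken verbatim from setup/common.sh:

#!/usr/bin/env bash
# Per-NUMA-node hugepage counters, read from the same sysfs files the trace
# uses. awk's $NF skips the "Node <N>" prefix and the key name.
shopt -s extglob

for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}
    total=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
    free=$(awk  '/HugePages_Free:/  {print $NF}' "$node_dir/meminfo")
    surp=$(awk  '/HugePages_Surp:/  {print $NF}' "$node_dir/meminfo")
    echo "node$node: HugePages_Total=$total HugePages_Free=$free HugePages_Surp=$surp"
done

On the machine in this log, node0 and node1 each report 512 pages (512 + 512 = 1024), which is what the per-node comparison below is about to verify.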
00:04:24.213 [xtrace condensed: the node0 scan tested MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and Unaccepted against HugePages_Surp and hit `continue` on each]
00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 8947876 kB' 'MemUsed: 10506440 kB' 'SwapCached: 0 kB' 'Active: 4729424 kB' 'Inactive: 3433028 kB' 'Active(anon): 4469152 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3433028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7958492 kB' 'Mapped: 79352 kB' 'AnonPages: 204028 kB' 'Shmem: 4265192 kB' 'KernelStack: 5464 kB' 'PageTables: 3300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115772 kB' 'Slab: 284604 kB' 'SReclaimable: 115772 kB' 'SUnreclaim: 168832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
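The node1 lookup traced here follows the pattern visible in the xtrace output: setup/common.sh reads /sys/devices/system/node/node1/meminfo (falling back to /proc/meminfo when no node is given), strips the leading 'Node N ' prefix from each line, and then walks the 'key: value' pairs with IFS=': ' until it reaches the requested key, HugePages_Surp in this case. A minimal sketch of that lookup is below; get_meminfo_sketch is a hypothetical name and the body is an approximation of the traced logic, not the verbatim SPDK helper.

#!/usr/bin/env bash
# Sketch of a meminfo key lookup in the style traced above (hypothetical helper,
# not the exact setup/common.sh implementation).
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node counters live in /sys/devices/system/node/nodeN/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while read -r line; do
        line=${line#"Node $node "}           # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                      # e.g. 0 for HugePages_Surp; sizes are in kB
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Example matching the trace: surplus hugepages on NUMA node 1.
get_meminfo_sketch HugePages_Surp 1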
00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.476 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.477 15:41:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.477 15:41:21 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:24.477 node0=512 expecting 512 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:24.477 node1=512 expecting 512 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:24.477 00:04:24.477 real 0m1.552s 00:04:24.477 user 0m0.675s 00:04:24.477 sys 0m0.851s 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.477 15:41:21 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:24.477 ************************************ 00:04:24.477 END TEST per_node_1G_alloc 00:04:24.477 ************************************ 00:04:24.477 15:41:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:24.477 15:41:21 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:24.477 15:41:21 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:24.477 15:41:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.477 15:41:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.477 ************************************ 00:04:24.477 START TEST even_2G_alloc 00:04:24.477 ************************************ 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.477 15:41:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:25.860 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:25.860 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 
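Before the even_2G_alloc body runs, the trace above has already fixed the test parameters: a 2097152 kB (2 GiB) request against the default 2048 kB hugepage size gives nr_hugepages=1024, and with HUGE_EVEN_ALLOC=yes that total is spread evenly over the two NUMA nodes, 512 pages each. The lines below are a hedged sketch of that arithmetic; the variable names mirror the trace, but the loop is an illustration, not the hugepages.sh code itself.

# Illustration of the even per-node split seen in the trace (assumes the
# 2048 kB default hugepage size reported in the meminfo snapshots above).
size_kb=2097152            # requested total: 2 GiB in kB
hugepage_kb=2048           # default hugepage size
no_nodes=2                 # node0 and node1, as in the trace

nr_hugepages=$(( size_kb / hugepage_kb ))             # 1024
declare -a nodes_test
for (( node = 0; node < no_nodes; node++ )); do
    nodes_test[node]=$(( nr_hugepages / no_nodes ))   # 512 per node
done
echo "NRHUGE=$nr_hugepages node0=${nodes_test[0]} node1=${nodes_test[1]}"
# -> NRHUGE=1024 node0=512 node1=512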
00:04:25.860 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:25.860 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:25.860 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:25.860 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:25.860 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:25.860 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:25.860 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:25.860 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:25.860 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:25.860 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:25.860 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:25.860 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:25.860 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:25.860 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:25.860 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30263228 kB' 'MemAvailable: 33828512 kB' 'Buffers: 2704 kB' 'Cached: 9292520 kB' 'SwapCached: 0 kB' 'Active: 6306444 kB' 'Inactive: 3505240 kB' 'Active(anon): 5916904 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519736 kB' 'Mapped: 155248 kB' 'Shmem: 5400444 kB' 'KReclaimable: 164656 kB' 'Slab: 477420 kB' 'SReclaimable: 164656 kB' 'SUnreclaim: 312764 kB' 'KernelStack: 12272 kB' 'PageTables: 7340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7037588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195568 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.860 
15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.860 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.861 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@97 -- # anon=0
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30263692 kB' 'MemAvailable: 33828976 kB' 'Buffers: 2704 kB' 'Cached: 9292524 kB' 'SwapCached: 0 kB' 'Active: 6306240 kB' 'Inactive: 3505240 kB' 'Active(anon): 5916700 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519524 kB' 'Mapped: 155136 kB' 'Shmem: 5400448 kB' 'KReclaimable: 164656 kB' 'Slab: 477420 kB' 'SReclaimable: 164656 kB' 'SUnreclaim: 312764 kB' 'KernelStack: 12304 kB' 'PageTables: 7420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7037604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195552 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB'
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.862 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/IFS/read xtrace repeats for each remaining /proc/meminfo field until HugePages_Surp is reached ...]
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
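The xtrace above is the per-field scan that setup/common.sh's get_meminfo performs: with an empty node argument the /sys/devices/system/node/node*/meminfo check at common.sh@23 fails, so it falls back to /proc/meminfo, strips any "Node <n>" prefix, and walks the fields until the requested key (HugePages_Surp here) matches, at which point it echoes the value and returns. The snippet below is a minimal, self-contained bash sketch of that pattern for anyone reproducing the lookup by hand; the function name get_meminfo_field and its argument handling are illustrative assumptions, not the SPDK helper itself.

  #!/usr/bin/env bash
  # Hypothetical sketch only -- mirrors the read/compare/continue loop in the trace.
  shopt -s extglob

  get_meminfo_field() {
      local get=$1 node=${2:-}
      local var val
      local mem_f=/proc/meminfo

      # With a node argument, prefer the per-node counters if they exist.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi

      local -a mem
      mapfile -t mem < "$mem_f"
      # Per-node files prefix every line with "Node <n> "; strip that part.
      mem=("${mem[@]#Node +([0-9]) }")

      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

  # Example (matches the value logged above): get_meminfo_field HugePages_Surp  ->  0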
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30264192 kB' 'MemAvailable: 33829476 kB' 'Buffers: 2704 kB' 'Cached: 9292540 kB' 'SwapCached: 0 kB' 'Active: 6306260 kB' 'Inactive: 3505240 kB' 'Active(anon): 5916720 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519524 kB' 'Mapped: 155136 kB' 'Shmem: 5400464 kB' 'KReclaimable: 164656 kB' 'Slab: 477420 kB' 'SReclaimable: 164656 kB' 'SUnreclaim: 312764 kB' 'KernelStack: 12320 kB' 'PageTables: 7420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7037624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195552 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB'
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.864 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/IFS/read xtrace repeats for each remaining /proc/meminfo field until HugePages_Rsvd is reached ...]
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
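Just above, hugepages.sh reports nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and then evaluates (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )) before re-reading HugePages_Total. With the 2048 kB Hugepagesize shown in the meminfo snapshot, 1024 pages is the 2 GB pool the even_2G_alloc test name implies (1024 x 2 MB = 2048 MB). The fragment below is only a hedged sketch of that arithmetic check, written standalone with awk rather than the SPDK helpers; the variable names and the error message are illustrative, not the script's own.

  #!/usr/bin/env bash
  # Hypothetical sketch mirroring the checks logged at setup/hugepages.sh@107-109.
  expected=1024   # 1024 pages x 2048 kB Hugepagesize = 2 GB, per the trace above

  nr_hugepages=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in this run
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)            # 0 in this run
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)            # 0 in this run

  # The trace asserts both conditions: the pool size matches the request even
  # after surplus and reserved pages are counted, and matches it exactly on its own.
  if (( expected == nr_hugepages + surp + resv )) && (( expected == nr_hugepages )); then
      echo "hugepage pool OK: ${nr_hugepages} x 2048 kB pages"
  else
      echo "hugepage pool mismatch: total=${nr_hugepages} surp=${surp} resv=${resv}" >&2
      exit 1
  fi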
00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30263940 kB' 'MemAvailable: 33829224 kB' 'Buffers: 2704 kB' 'Cached: 9292564 kB' 'SwapCached: 0 kB' 'Active: 6306228 kB' 'Inactive: 3505240 kB' 'Active(anon): 5916688 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519520 kB' 'Mapped: 155136 kB' 'Shmem: 5400488 kB' 'KReclaimable: 164656 kB' 'Slab: 477420 kB' 'SReclaimable: 164656 kB' 'SUnreclaim: 312764 kB' 'KernelStack: 12320 kB' 'PageTables: 7420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7037648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195552 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 
15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.866 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.867 
15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:25.867 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21299624 kB' 'MemUsed: 3272732 kB' 'SwapCached: 0 kB' 'Active: 1576792 kB' 'Inactive: 72212 kB' 'Active(anon): 1447524 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1336508 kB' 'Mapped: 75708 kB' 'AnonPages: 315572 kB' 'Shmem: 1135028 kB' 'KernelStack: 6792 kB' 'PageTables: 4040 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 48884 kB' 'Slab: 192696 kB' 'SReclaimable: 48884 kB' 'SUnreclaim: 143812 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 
15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.868 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 8963812 kB' 'MemUsed: 10490504 kB' 'SwapCached: 0 kB' 'Active: 4729580 kB' 'Inactive: 3433028 kB' 'Active(anon): 4469308 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3433028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7958764 kB' 'Mapped: 79428 kB' 'AnonPages: 204092 kB' 'Shmem: 4265464 kB' 'KernelStack: 5528 kB' 'PageTables: 3380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 
'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115772 kB' 'Slab: 284724 kB' 'SReclaimable: 115772 kB' 'SUnreclaim: 168952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.869 15:41:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.869 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
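The field-by-field scan above (and continuing below) is the get_meminfo helper from setup/common.sh running under xtrace: given a key such as HugePages_Surp and an optional NUMA node, it picks /proc/meminfo or /sys/devices/system/node/node<N>/meminfo, strips the "Node <N> " prefix from the per-node file, and walks the entries until the key matches, at which point it echoes the value. A minimal stand-alone sketch of that logic, reconstructed from the trace rather than copied from the script (the function name is illustrative):

get_meminfo_sketch() {
    # usage: get_meminfo_sketch <key> [numa-node]   e.g. get_meminfo_sketch HugePages_Surp 1
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val rest
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        line=${line#"Node $node "}                  # per-node files prefix every entry with "Node <N> "
        IFS=': ' read -r var val rest <<< "$line"   # split "Key:   value kB" into key / value
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

Run as get_meminfo_sketch HugePages_Surp 1 against the node1 values printed earlier, it returns the same 0 that the traced script echoes a few entries further down.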
00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:25.870 node0=512 expecting 512 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:25.870 node1=512 expecting 512 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:25.870 00:04:25.870 real 0m1.488s 00:04:25.870 user 0m0.632s 00:04:25.870 sys 0m0.822s 00:04:25.870 15:41:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.871 15:41:23 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:25.871 ************************************ 00:04:25.871 END TEST even_2G_alloc 00:04:25.871 ************************************ 00:04:25.871 15:41:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:25.871 15:41:23 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:25.871 15:41:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.871 15:41:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.871 15:41:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.871 ************************************ 00:04:25.871 START TEST odd_alloc 
00:04:25.871 ************************************ 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.871 15:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:27.252 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:27.252 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:27.252 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:27.252 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:27.252 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:27.252 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:27.252 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:27.252 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 
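The odd_alloc test starting here requests HUGEMEM=2049 (MB), i.e. 2098176 kB, which get_test_nr_hugepages converts to nr_hugepages=1025 pages of 2048 kB; the per-node loop traced above then gives node1 512 pages and node0 the remaining 513. One way to reproduce that arithmetic (an illustration reconstructed from the trace, not the actual hugepages.sh code; the function name is made up):

split_hugepages_sketch() {
    # usage: split_hugepages_sketch <total_pages> <num_nodes>   e.g. split_hugepages_sketch 1025 2
    local remaining=$1 no_nodes=$2 share
    while (( no_nodes > 0 )); do
        share=$(( remaining / no_nodes ))        # 1025/2 = 512, then 513/1 = 513
        echo "node$(( no_nodes - 1 ))=$share"    # assigns the highest-numbered node first
        remaining=$(( remaining - share ))
        no_nodes=$(( no_nodes - 1 ))
    done
}

split_hugepages_sketch 1025 2 prints node1=512 and node0=513; the two shares differ by at most one page, which is how an odd total is spread over two nodes. The rest of the devices that setup.sh reports as already bound to the vfio-pci driver follow below.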
00:04:27.252 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:27.252 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:27.252 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:27.252 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:27.252 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:27.252 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:27.252 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:27.252 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:27.252 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30253300 kB' 'MemAvailable: 33818580 kB' 'Buffers: 2704 kB' 'Cached: 9292804 kB' 'SwapCached: 0 kB' 'Active: 6303660 kB' 'Inactive: 3505240 kB' 'Active(anon): 5914120 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516656 kB' 'Mapped: 154356 kB' 'Shmem: 5400728 kB' 'KReclaimable: 164648 kB' 'Slab: 477688 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 313040 kB' 'KernelStack: 12224 kB' 'PageTables: 7004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7024520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195616 kB' 
'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.252 15:41:24 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 
15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30253856 kB' 'MemAvailable: 33819136 kB' 'Buffers: 2704 kB' 'Cached: 9292804 kB' 'SwapCached: 0 kB' 'Active: 6303676 kB' 'Inactive: 3505240 kB' 'Active(anon): 5914136 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516636 kB' 'Mapped: 154300 kB' 'Shmem: 5400728 kB' 'KReclaimable: 164648 kB' 'Slab: 477676 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 313028 kB' 'KernelStack: 12256 kB' 'PageTables: 7096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7024536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.253 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
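For readers following the trace: the long runs of read/continue lines here come from the get_meminfo helper in setup/common.sh, which scans /proc/meminfo (or a per-NUMA-node meminfo file) one "key: value" pair at a time until it reaches the requested key, echoes that value, and returns. The sketch below is a reconstruction from this xtrace output only; the argument handling, the loop form, and the extglob prefix-strip are assumptions, not the verbatim upstream source.

    #!/usr/bin/env bash
    # Sketch of the get_meminfo helper traced above (setup/common.sh).
    # Reconstructed for illustration from the xtrace lines; not upstream code.
    shopt -s extglob

    get_meminfo() {
        local get=$1      # key to look up, e.g. HugePages_Surp
        local node=$2     # optional NUMA node; empty means system-wide
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # prefer the per-node file when a node was requested and it exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node meminfo prefixes each line with "Node <n> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # not the key we were asked for
            echo "$val"                        # kB figure, or a bare page count
            return 0
        done
        return 1
    }

    # How the hugepages odd_alloc test above consumes it (values from this run):
    #   anon=$(get_meminfo AnonHugePages)     -> 0
    #   surp=$(get_meminfo HugePages_Surp)    -> 0
    #   resv=$(get_meminfo HugePages_Rsvd)    -> 0

Further down in this trace the test uses those lookups to validate the odd-numbered allocation: with surp=0 and resv=0 it checks that HugePages_Total (1025) equals nr_hugepages + surp + resv before moving on.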
00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.254 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30254348 kB' 'MemAvailable: 33819628 kB' 'Buffers: 2704 kB' 'Cached: 9292828 kB' 'SwapCached: 0 kB' 'Active: 6303500 kB' 'Inactive: 3505240 kB' 'Active(anon): 5913960 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516440 kB' 'Mapped: 154224 kB' 'Shmem: 5400752 kB' 'KReclaimable: 164648 kB' 'Slab: 477652 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 313004 kB' 'KernelStack: 12272 kB' 'PageTables: 7144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7024556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.255 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 15:41:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.256 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.517 
15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.517 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:27.518 nr_hugepages=1025 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:27.518 resv_hugepages=0 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:27.518 surplus_hugepages=0 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:27.518 anon_hugepages=0 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.518 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30254380 kB' 'MemAvailable: 33819660 kB' 'Buffers: 2704 kB' 'Cached: 9292848 kB' 'SwapCached: 0 kB' 'Active: 6303480 kB' 'Inactive: 3505240 kB' 'Active(anon): 5913940 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516440 kB' 'Mapped: 154224 kB' 'Shmem: 5400772 kB' 'KReclaimable: 164648 kB' 'Slab: 477652 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 313004 kB' 'KernelStack: 12272 kB' 'PageTables: 7144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352340 kB' 'Committed_AS: 7024576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.519 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21283700 kB' 'MemUsed: 3288656 kB' 'SwapCached: 0 kB' 'Active: 1576844 kB' 'Inactive: 72212 kB' 'Active(anon): 1447576 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1336580 kB' 'Mapped: 75720 kB' 'AnonPages: 315620 kB' 'Shmem: 1135100 kB' 'KernelStack: 6792 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 48876 kB' 'Slab: 192960 kB' 'SReclaimable: 48876 kB' 'SUnreclaim: 144084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
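The trace above shows setup/common.sh's get_meminfo helper at work: it picks /proc/meminfo (or /sys/devices/system/node/node<N>/meminfo when a node number is given), strips the per-node "Node <N> " prefix, then walks the dump field by field with IFS=': ' until it reaches the requested key and echoes the bare value -- 1025 for the global HugePages_Total check above, 0 for the per-node HugePages_Surp reads that follow. A minimal bash sketch of that lookup, reconstructed from the xtrace rather than copied from the real helper:

shopt -s extglob   # the +([0-9]) pattern below needs extended globs

get_meminfo() {
	local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
	# Per-node meminfo files prefix every field with "Node <N> ".
	[[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	while IFS= read -r line; do
		line=${line#Node +([0-9]) }           # drop the per-node prefix, if any
		IFS=': ' read -r var val _ <<<"$line"
		if [[ $var == "$get" ]]; then
			echo "$val"                   # bare number, e.g. 0 or 1025
			return 0
		fi
	done <"$mem_f"
	return 1
}

# On the box traced above: get_meminfo HugePages_Total prints 1025,
# get_meminfo HugePages_Surp 0 prints 0.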
00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.520 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 8970680 kB' 'MemUsed: 10483636 kB' 'SwapCached: 0 kB' 'Active: 4726664 kB' 'Inactive: 3433028 kB' 'Active(anon): 4466392 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3433028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7958996 kB' 'Mapped: 78504 kB' 'AnonPages: 200812 kB' 'Shmem: 4265696 kB' 'KernelStack: 5480 kB' 'PageTables: 3092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115772 kB' 'Slab: 284692 kB' 'SReclaimable: 115772 kB' 'SUnreclaim: 168920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:27.521 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.522 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:27.523 node0=512 expecting 513 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:27.523 node1=513 expecting 512 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:27.523 00:04:27.523 real 0m1.527s 00:04:27.523 user 0m0.640s 00:04:27.523 sys 0m0.862s 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.523 15:41:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:27.523 ************************************ 00:04:27.523 END TEST odd_alloc 00:04:27.523 ************************************ 00:04:27.523 15:41:24 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:27.523 15:41:24 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:27.523 15:41:24 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.523 15:41:24 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.523 15:41:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.523 ************************************ 00:04:27.523 START TEST custom_alloc 00:04:27.523 ************************************ 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.523 15:41:24 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 
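At this point custom_alloc has turned its two size requests into per-node page counts: 1048576 kB (1 GiB) becomes 512 pages and 2097152 kB (2 GiB) becomes 1024 pages at the 2048 kB default hugepage size, giving nodes_hp[0]=512 and nodes_hp[1]=1024. The loop that follows joins those into the HUGENODE spec handed to scripts/setup.sh and sums them to the 1536 pages verified further down. A rough sketch of that arithmetic (illustrative only, not the real setup/hugepages.sh):

kb_per_hugepage=2048                           # Hugepagesize reported in the meminfo dumps

declare -a nodes_hp
nodes_hp[0]=$(( 1048576 / kb_per_hugepage ))   # 1 GiB on node 0 -> 512 pages
nodes_hp[1]=$(( 2097152 / kb_per_hugepage ))   # 2 GiB on node 1 -> 1024 pages

hugenode=()
total=0
for node in "${!nodes_hp[@]}"; do
	hugenode+=("nodes_hp[$node]=${nodes_hp[node]}")
	(( total += nodes_hp[node] ))
done

# Join with commas, mirroring the HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' seen in the trace.
IFS=,
HUGENODE="${hugenode[*]}"
unset IFS
echo "$HUGENODE"   # nodes_hp[0]=512,nodes_hp[1]=1024
echo "$total"      # 1536, matching nr_hugepages and HugePages_Total below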
00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:27.523 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.524 15:41:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:28.908 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:28.908 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:28.908 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:28.908 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:28.908 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:28.908 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:28.908 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:28.908 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:28.908 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:28.908 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:28.908 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:28.908 0000:80:04.5 (8086 0e25): Already using the 
vfio-pci driver 00:04:28.908 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:28.908 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:28.908 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:28.908 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:28.908 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29201336 kB' 'MemAvailable: 32766616 kB' 'Buffers: 2704 kB' 'Cached: 9293100 kB' 'SwapCached: 0 kB' 'Active: 6304272 kB' 'Inactive: 3505240 kB' 'Active(anon): 5914732 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516940 kB' 'Mapped: 154396 kB' 'Shmem: 5401024 kB' 'KReclaimable: 164648 kB' 'Slab: 477464 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 312816 kB' 'KernelStack: 12272 kB' 'PageTables: 7160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7024808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195568 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.908 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.909 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.910 15:41:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29201336 kB' 'MemAvailable: 32766616 kB' 'Buffers: 2704 kB' 'Cached: 9293104 kB' 'SwapCached: 0 kB' 'Active: 6303852 kB' 'Inactive: 3505240 kB' 'Active(anon): 5914312 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516460 kB' 'Mapped: 154344 kB' 'Shmem: 5401028 kB' 'KReclaimable: 164648 kB' 'Slab: 477432 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 312784 kB' 'KernelStack: 12256 kB' 'PageTables: 7088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7024828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195520 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.910 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
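(The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue' entries above and below are common.sh's get_meminfo walking the captured /proc/meminfo fields one at a time until it reaches the requested key; the HugePages_Surp scan continues below and ends with 'echo 0'. A minimal bash sketch of that reader, reconstructed from the traced common.sh@17-@33 commands; the extglob prefix strip and the IFS=': ' read are copied from the trace, the surrounding control flow is an approximation.)

    shopt -s extglob                                  # needed for the +([0-9]) pattern below
    # Sketch (assumed, not verbatim): print the value of one field from /proc/meminfo,
    # or from a per-node meminfo file when a node number is given.
    get_meminfo() {
        local get=$1 node=${2:-}                      # @17-@18: e.g. get=HugePages_Surp, node empty here
        local var val
        local mem_f mem
        mem_f=/proc/meminfo                           # @22: system-wide meminfo by default
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then   # @23: false here, node is empty
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"                     # @28: one array element per meminfo line
        mem=("${mem[@]#Node +([0-9]) }")              # @29: drop the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do          # @31: split "Field: value kB"
            [[ $var == "$get" ]] || continue          # @32: the repeated 'continue' entries in the trace
            echo "$val"                               # @33: e.g. 0 for HugePages_Surp
            return 0
        done < <(printf '%s\n' "${mem[@]}")           # @16: the large printf seen in the trace
        return 1
    }

In this run verify_nr_hugepages calls it three times against the 1536 pages just allocated (512 on node 0 plus 1024 on node 1): AnonHugePages gives anon=0, HugePages_Surp gives surp=0, and the HugePages_Rsvd scan that follows does the same for the reserved count.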
00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29201336 kB' 'MemAvailable: 32766616 kB' 'Buffers: 2704 kB' 'Cached: 9293104 kB' 'SwapCached: 0 kB' 'Active: 6304024 kB' 'Inactive: 3505240 kB' 'Active(anon): 5914484 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516668 kB' 'Mapped: 154344 kB' 'Shmem: 5401028 kB' 'KReclaimable: 164648 kB' 'Slab: 477432 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 312784 kB' 'KernelStack: 12272 kB' 'PageTables: 7140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7024848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195520 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.911 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.912 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:28.913 nr_hugepages=1536 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:28.913 resv_hugepages=0 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:28.913 surplus_hugepages=0 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:28.913 anon_hugepages=0 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 29201712 kB' 'MemAvailable: 32766992 kB' 'Buffers: 2704 kB' 'Cached: 9293144 kB' 'SwapCached: 0 kB' 'Active: 6303896 kB' 'Inactive: 3505240 kB' 'Active(anon): 5914356 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516500 kB' 'Mapped: 154344 kB' 'Shmem: 5401068 kB' 'KReclaimable: 164648 kB' 'Slab: 477432 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 312784 kB' 'KernelStack: 12272 kB' 'PageTables: 7140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829076 kB' 'Committed_AS: 7024868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195520 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
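The loop traced above is setup/common.sh's get_meminfo walking every field of the chosen meminfo file until it reaches the requested key (HugePages_Rsvd here) and echoing its value, which the caller records as resv=0 alongside nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0. A rough bash re-creation of that parsing pattern, inferred only from this xtrace (the helper name get_meminfo_sketch and the exact argument handling are assumptions, not the script's real code):

# Approximation of the get_meminfo pattern visible in the trace: pick
# /proc/meminfo or a per-node meminfo file, strip the "Node N " prefix,
# then split each line on ':' / ' ' and stop at the requested field.
shopt -s extglob    # needed for the +([0-9]) pattern, mirroring the trace
get_meminfo_sketch() {
    local get=$1 node=${2:-} var val _ mem
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix each line with "Node N "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# e.g. get_meminfo_sketch HugePages_Rsvd      -> 0 on this machine
#      get_meminfo_sketch HugePages_Total 0   -> 512 for node0 in this run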
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.913 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.914 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.915 15:41:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:28.915 
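At this point the second get_meminfo call has returned 1536 for the system-wide HugePages_Total, satisfying the (( 1536 == nr_hugepages + surp + resv )) check, and get_nodes has recorded the per-node plan for this run: 512 pages on node0 and 1024 on node1, with no_nodes=2. A compact way to express the same accounting check outside the harness (the values are taken from this run; the nodes_plan name is a stand-in for the script's nodes_sys/nodes_test arrays):

# Global accounting check mirrored from the trace: requested pages plus
# surplus plus reserved must equal what the kernel reports.
nr_hugepages=1536 surp=0 resv=0
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting OK: HugePages_Total=$total"
else
    echo "hugepage accounting mismatch: total=$total expected=$((nr_hugepages + surp + resv))"
fi
# Per-node split this run aims for (node index -> pages).
nodes_plan=([0]=512 [1]=1024)
echo "planned split: node0=${nodes_plan[0]} node1=${nodes_plan[1]}"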
15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 21284268 kB' 'MemUsed: 3288088 kB' 'SwapCached: 0 kB' 'Active: 1577008 kB' 'Inactive: 72212 kB' 'Active(anon): 1447740 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1336636 kB' 'Mapped: 75736 kB' 'AnonPages: 315728 kB' 'Shmem: 1135156 kB' 'KernelStack: 6840 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 48876 kB' 'Slab: 192868 kB' 'SReclaimable: 48876 kB' 'SUnreclaim: 143992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.915 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:28.916 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- 
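The node0 read has just completed: /sys/devices/system/node/node0/meminfo reports HugePages_Total: 512, HugePages_Free: 512 and HugePages_Surp: 0, so get_meminfo echoes 0 and nodes_test[0] is incremented by nothing (no surplus pages on node0). The same node0 figures can be spot-checked directly; this is only an illustration of the per-node file layout, not part of the test:

# Per-node meminfo lines look like "Node 0 HugePages_Total:   512",
# so the field name sits in $3 and the value in $4.
node=0
for field in HugePages_Total HugePages_Free HugePages_Surp; do
    awk -v f="$field:" '$3 == f {print f, $4}' \
        "/sys/devices/system/node/node$node/meminfo"
done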
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19454316 kB' 'MemFree: 7917004 kB' 'MemUsed: 11537312 kB' 'SwapCached: 0 kB' 'Active: 4727084 kB' 'Inactive: 3433028 kB' 'Active(anon): 4466812 kB' 'Inactive(anon): 0 kB' 'Active(file): 260272 kB' 'Inactive(file): 3433028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7959252 kB' 'Mapped: 78608 kB' 'AnonPages: 200984 kB' 'Shmem: 4265952 kB' 'KernelStack: 5448 kB' 'PageTables: 3064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115772 kB' 'Slab: 284564 kB' 'SReclaimable: 115772 kB' 'SUnreclaim: 168792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:29.177 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [xtrace condensed: the same test / continue / IFS=': ' / read -r var val _ pattern repeats for every remaining /proc/meminfo field from SwapCached through AnonHugePages while get_meminfo scans for HugePages_Surp]
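The loop condensed above is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time with IFS=': ' and read -r var val _ until it reaches the requested key (HugePages_Surp here). A minimal self-contained sketch of that pattern, using an assumed helper name that is not in common.sh:

    #!/usr/bin/env bash
    # Return the value of one /proc/meminfo field, mirroring the scan in the trace above.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do              # "HugePages_Surp:   0" -> var=key, val=number
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Surp    # prints 0 on this run (matches the 'echo 0' below)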
00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.179 
15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:29.179 node0=512 expecting 512 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:29.179 node1=1024 expecting 1024 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:29.179 00:04:29.179 real 0m1.541s 00:04:29.179 user 0m0.660s 00:04:29.179 sys 0m0.853s 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.179 15:41:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:29.179 ************************************ 00:04:29.179 END TEST custom_alloc 00:04:29.179 ************************************ 00:04:29.179 15:41:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:29.179 15:41:26 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:29.179 15:41:26 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.179 15:41:26 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.179 15:41:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:29.179 ************************************ 00:04:29.179 START TEST no_shrink_alloc 00:04:29.179 ************************************ 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.179 15:41:26 
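custom_alloc has just verified its 512/1024 split ("node0=512 expecting 512", "node1=1024 expecting 1024"), and no_shrink_alloc is now asking get_test_nr_hugepages for 2097152 kB pinned to node 0, which the trace resolves to nr_hugepages=1024. The arithmetic behind that figure, as a sketch (variable names are illustrative; the sysfs path is the standard kernel location for per-node counts):

    size_kb=2097152                                                     # request seen in the trace
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 kB on this system
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                       # 2097152 / 2048 = 1024
    echo "node0=$nr_hugepages"
    # Per-node allocations can be cross-checked against the kernel's counters, e.g.
    #   /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages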
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.179 15:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:30.115 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:30.115 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:30.115 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:30.115 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:30.115 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:30.115 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:30.115 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:30.115 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:30.115 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:30.115 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:30.115 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:30.115 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:30.115 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:30.115 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:30.115 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:30.377 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:30.377 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.377 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30243068 kB' 'MemAvailable: 33808348 kB' 'Buffers: 2704 kB' 'Cached: 9301428 kB' 'SwapCached: 0 kB' 'Active: 6312512 kB' 'Inactive: 3505240 kB' 'Active(anon): 5922972 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516764 kB' 'Mapped: 154384 kB' 'Shmem: 5409352 kB' 'KReclaimable: 164648 kB' 'Slab: 477260 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 312612 kB' 'KernelStack: 12272 kB' 'PageTables: 7080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7033464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195632 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.378 15:41:27 setup.sh.hugepages.no_shrink_alloc 
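In the trace above, get_meminfo is called without a node, so the '-e /sys/devices/system/node/node/meminfo' test fails, the global /proc/meminfo is read, and the "${mem[@]#Node +([0-9]) }" expansion is a no-op; it only matters for the per-node files, whose lines carry a "Node <id>" prefix. A self-contained sketch of that selection step (the helper name and the -n guard are additions of this sketch, not common.sh verbatim):

    #!/usr/bin/env bash
    shopt -s extglob                            # required for the +([0-9]) pattern below

    get_node_meminfo_lines() {
        local node=$1 mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        printf '%s\n' "${mem[@]#Node +([0-9]) }"   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    }

    get_node_meminfo_lines ""                   # empty node => global /proc/meminfo, as in the trace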
-- setup/common.sh@31 -- # IFS=': ' [xtrace condensed: the AnonHugePages lookup tests every field from Cached through VmallocChunk with the same test / continue / IFS=': ' / read -r var val _ pattern] 00:04:30.379 15:41:27
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30242832 kB' 'MemAvailable: 33808112 kB' 'Buffers: 2704 kB' 'Cached: 9301432 kB' 'SwapCached: 0 kB' 'Active: 6312408 kB' 'Inactive: 3505240 kB' 'Active(anon): 5922868 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516728 kB' 'Mapped: 154272 kB' 'Shmem: 5409356 kB' 'KReclaimable: 164648 kB' 'Slab: 477220 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 312572 kB' 'KernelStack: 12304 kB' 'PageTables: 7132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7033480 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195584 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.379 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] [xtrace condensed: the HugePages_Surp lookup skips every field from Inactive through HugePages_Total with the same test / continue / IFS=': ' / read -r var val _ pattern] 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30242580 kB' 'MemAvailable: 33807860 kB' 'Buffers: 2704 kB' 'Cached: 9301448 kB' 'SwapCached: 0 kB' 'Active: 6312392 kB' 'Inactive: 3505240 kB' 'Active(anon): 5922852 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516696 kB' 'Mapped: 154272 kB' 'Shmem: 5409372 kB' 'KReclaimable: 164648 kB' 'Slab: 477252 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 312604 kB' 'KernelStack: 12320 kB' 'PageTables: 7156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7033504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195600 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.381 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.382 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:30.383 nr_hugepages=1024 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.383 resv_hugepages=0 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.383 surplus_hugepages=0 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.383 anon_hugepages=0 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.383 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30242580 kB' 'MemAvailable: 33807860 kB' 'Buffers: 2704 kB' 'Cached: 9301468 kB' 'SwapCached: 0 kB' 'Active: 6312956 kB' 'Inactive: 3505240 kB' 'Active(anon): 5923416 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517304 kB' 'Mapped: 154272 kB' 'Shmem: 5409392 kB' 'KReclaimable: 164648 kB' 'Slab: 477252 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 312604 kB' 'KernelStack: 12336 kB' 'PageTables: 7200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7034692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195584 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.384 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:30.385 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:30.645 15:41:27 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20241828 kB' 'MemUsed: 4330528 kB' 'SwapCached: 0 kB' 'Active: 1585724 kB' 'Inactive: 72212 kB' 'Active(anon): 1456456 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1344836 kB' 'Mapped: 75744 kB' 'AnonPages: 316264 kB' 'Shmem: 1143356 kB' 'KernelStack: 7000 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 48876 kB' 'Slab: 192840 kB' 'SReclaimable: 48876 kB' 'SUnreclaim: 143964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.645 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.646 15:41:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: setup/common.sh@31-@32 keeps looping 'read -r var val _' over the remaining node0 meminfo keys (Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free), comparing each against HugePages_Surp and skipping it with 'continue' until HugePages_Surp matches]
00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:30.646 node0=1024 expecting 1024
00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:30.646 15:41:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:31.581 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:31.581 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:31.581 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:31.581 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:31.581 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:31.581 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:31.581 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:31.581 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:31.581 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:31.581 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:31.581 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:31.581 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:31.581 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:31.581 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:31.581 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:31.581 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:31.581 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:31.845 INFO: Requested 512 hugepages but 1024 already allocated on node0
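For readers following the xtrace above: the setup/common.sh helper is walking /proc/meminfo (or a per-node meminfo file) with IFS=': ' and 'read -r var val _', skipping every key until the requested one matches and echoing its value. Below is a minimal illustrative sketch of that pattern written for this log; the name get_meminfo_sketch and its exact structure are assumptions, not SPDK's setup/common.sh source.

#!/usr/bin/env bash
# Minimal sketch of the meminfo lookup pattern shown in the trace above.
# get_meminfo_sketch is an illustrative name, not SPDK's actual helper.
get_meminfo_sketch() {
    local get=$1 node=${2:-}              # key to look up, optional NUMA node
    local mem_f=/proc/meminfo
    # Per-node lookups read that node's own meminfo file when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local var val _
    # Split each "Key: value [kB]" line on ': ' and stop at the first match.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys
        echo "${val:-0}"
        return 0
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    echo 0
}

get_meminfo_sketch HugePages_Surp 0   # e.g. prints 0 for node0 on this host

The long runs of 'continue' in the trace are exactly this skip-until-match loop visiting every meminfo key ahead of the requested one.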
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.845 15:41:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.845 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:31.845 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:31.845 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30244140 kB' 'MemAvailable: 33809420 kB' 'Buffers: 2704 kB' 'Cached: 9301532 kB' 'SwapCached: 0 kB' 'Active: 6312972 kB' 'Inactive: 3505240 kB' 'Active(anon): 5923432 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517140 kB' 'Mapped: 154320 kB' 'Shmem: 5409456 kB' 'KReclaimable: 164648 kB' 'Slab: 477108 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 312460 kB' 'KernelStack: 12336 kB' 'PageTables: 7196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7033860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195696 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB'
[xtrace condensed: setup/common.sh@31-@32 compares every /proc/meminfo key from MemTotal through HardwareCorrupted against AnonHugePages and skips it with 'continue' until AnonHugePages matches]
00:04:31.846 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:31.846 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:31.846 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:31.846 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:31.846 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:31.846 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:31.846 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:31.846 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:31.846 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.846 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.846 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.846 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.846 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.847 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:31.847 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:31.847 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30245476 kB' 'MemAvailable: 33810756 kB' 'Buffers: 2704 kB' 'Cached: 9301536 kB' 'SwapCached: 0 kB' 'Active: 6312980 kB' 'Inactive: 3505240 kB' 'Active(anon): 5923440 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517244 kB' 'Mapped: 154332 kB' 'Shmem: 5409460 kB' 'KReclaimable: 164648 kB' 'Slab: 477156 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 312508 kB' 'KernelStack: 12336 kB' 'PageTables: 7216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7033876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB'
[xtrace condensed: setup/common.sh@31-@32 compares every /proc/meminfo key from MemTotal through HugePages_Rsvd against HugePages_Surp and skips it with 'continue' until HugePages_Surp matches]
00:04:31.848 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:31.848 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:31.848 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:31.848 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:31.848 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:31.848 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:31.848 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:31.848 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:31.848 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.848 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.848 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.848 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.848 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.848 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:31.849 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:31.849 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30245520 kB' 'MemAvailable: 33810800 kB' 'Buffers: 2704 kB' 'Cached: 9301540 kB' 'SwapCached: 0 kB' 'Active: 6312256 kB' 'Inactive: 3505240 kB' 'Active(anon): 5922716 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516484 kB' 'Mapped: 154252 kB' 'Shmem: 5409464 kB' 'KReclaimable: 164648 kB' 'Slab: 477140 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 312492 kB' 'KernelStack: 12304 kB' 'PageTables: 7108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7033904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB'
[xtrace condensed: setup/common.sh@31-@32 compares each /proc/meminfo key from MemTotal through SecPageTables against HugePages_Rsvd and skips it with 'continue']
setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:31.850 nr_hugepages=1024 00:04:31.850 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:31.850 resv_hugepages=0 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:31.851 surplus_hugepages=0 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:31.851 anon_hugepages=0 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026672 kB' 'MemFree: 30245520 kB' 'MemAvailable: 33810800 kB' 'Buffers: 2704 kB' 'Cached: 9301544 kB' 'SwapCached: 0 kB' 'Active: 6312388 kB' 'Inactive: 3505240 kB' 'Active(anon): 5922848 kB' 'Inactive(anon): 0 kB' 'Active(file): 389540 kB' 'Inactive(file): 3505240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516616 kB' 'Mapped: 154252 kB' 'Shmem: 5409468 kB' 'KReclaimable: 164648 kB' 'Slab: 477140 kB' 'SReclaimable: 164648 kB' 'SUnreclaim: 312492 kB' 'KernelStack: 12288 kB' 'PageTables: 7056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353364 kB' 'Committed_AS: 7033924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 30912 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1211996 kB' 'DirectMap2M: 10242048 kB' 'DirectMap1G: 40894464 kB' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.851 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.852 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.852 15:41:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24572356 kB' 'MemFree: 20246052 kB' 'MemUsed: 4326304 kB' 'SwapCached: 0 kB' 'Active: 1585436 kB' 'Inactive: 72212 kB' 'Active(anon): 1456168 kB' 'Inactive(anon): 0 kB' 'Active(file): 129268 kB' 'Inactive(file): 72212 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1344908 kB' 'Mapped: 75748 kB' 'AnonPages: 315892 kB' 'Shmem: 1143428 kB' 'KernelStack: 6808 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 48876 kB' 'Slab: 192800 kB' 'SReclaimable: 48876 kB' 'SUnreclaim: 143924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 
15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 
15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.853 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.854 15:41:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:31.854 node0=1024 expecting 1024 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:31.854 00:04:31.854 real 0m2.861s 00:04:31.854 user 0m1.165s 00:04:31.854 sys 0m1.638s 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.854 15:41:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:31.854 ************************************ 00:04:31.854 END TEST no_shrink_alloc 00:04:31.854 ************************************ 00:04:32.112 15:41:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 
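The xtrace above is the harness walking /proc/meminfo (and, for per-node figures, /sys/devices/system/node/nodeN/meminfo) one key at a time until it reaches the field it needs (HugePages_Rsvd, HugePages_Total, HugePages_Surp) and echoing that value back to the hugepages checks. The helper below is a minimal re-creation of that pattern for anyone reproducing the check outside the harness; it is a sketch, not the SPDK setup/common.sh implementation, and the name get_meminfo_sketch is illustrative only.

shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

# Print the value of one meminfo key, system-wide or for a single NUMA node.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }           # per-node files prefix every key with "Node N "
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done <"$mem_f"
    return 1
}

# Examples matching the checks traced above:
#   get_meminfo_sketch HugePages_Total      # 1024 expected system-wide
#   get_meminfo_sketch HugePages_Surp 0     # 0 surplus pages expected on node0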
00:04:32.112 15:41:29 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:32.112 15:41:29 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:32.112 15:41:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:32.112 15:41:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.112 15:41:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.112 15:41:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.112 15:41:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.112 15:41:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:32.112 15:41:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.112 15:41:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.112 15:41:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.112 15:41:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.112 15:41:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:32.112 15:41:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:32.112 00:04:32.112 real 0m11.950s 00:04:32.112 user 0m4.613s 00:04:32.112 sys 0m6.287s 00:04:32.112 15:41:29 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.112 15:41:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:32.112 ************************************ 00:04:32.112 END TEST hugepages 00:04:32.112 ************************************ 00:04:32.112 15:41:29 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:32.112 15:41:29 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:32.112 15:41:29 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.112 15:41:29 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.112 15:41:29 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:32.112 ************************************ 00:04:32.112 START TEST driver 00:04:32.112 ************************************ 00:04:32.112 15:41:29 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:32.112 * Looking for test storage... 
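Before the driver suite's output continues below, the clear_hp step that closed the hugepages suite above amounts to: write 0 to nr_hugepages for every hugepage size on every NUMA node, then export CLEAR_HUGE=yes so the subsequent setup.sh reset starts from drained pools. A short reconstruction under those assumptions (the sysfs paths are the standard kernel layout; the function body is illustrative, not the literal setup/hugepages.sh):

#!/usr/bin/env bash
# Illustrative reconstruction of the clear_hp step traced above.
clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # Each write mirrors one of the "echo 0" lines in the trace; requires root.
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # flag consumed by scripts/setup.sh
}

clear_hp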
00:04:32.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:32.112 15:41:29 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:32.112 15:41:29 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.112 15:41:29 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:34.637 15:41:31 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:34.637 15:41:31 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.637 15:41:31 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.637 15:41:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:34.637 ************************************ 00:04:34.637 START TEST guess_driver 00:04:34.637 ************************************ 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:34.637 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:34.637 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:34.637 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:34.637 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:34.637 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:34.637 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:34.637 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:34.637 15:41:31 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:34.637 Looking for driver=vfio-pci 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.637 15:41:31 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:36.012 15:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:33 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.012 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.949 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:36.949 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:36.949 15:41:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:36.949 15:41:34 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:36.949 15:41:34 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:36.949 15:41:34 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:36.949 15:41:34 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:39.476 00:04:39.476 real 0m4.960s 00:04:39.476 user 0m1.147s 00:04:39.476 sys 0m1.907s 00:04:39.476 15:41:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.476 15:41:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:39.476 ************************************ 00:04:39.476 END TEST guess_driver 00:04:39.476 ************************************ 00:04:39.476 15:41:36 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:39.476 00:04:39.476 real 0m7.495s 00:04:39.476 user 0m1.659s 00:04:39.476 sys 0m2.899s 00:04:39.476 15:41:36 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.476 15:41:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:39.476 ************************************ 00:04:39.476 END TEST driver 00:04:39.476 ************************************ 00:04:39.476 15:41:36 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:39.476 15:41:36 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:39.476 15:41:36 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.476 15:41:36 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.476 15:41:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.476 ************************************ 00:04:39.476 START TEST devices 00:04:39.476 ************************************ 00:04:39.476 15:41:36 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:39.734 * Looking for test storage... 00:04:39.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:39.734 15:41:36 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:39.734 15:41:36 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:39.734 15:41:36 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.734 15:41:36 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.114 15:41:38 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:41.114 15:41:38 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:41.114 15:41:38 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:41.114 15:41:38 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:41.114 15:41:38 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:41.114 15:41:38 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:41.114 15:41:38 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:41.114 15:41:38 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.114 15:41:38 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:41.114 15:41:38 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:41.114 15:41:38 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:41.114 15:41:38 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:41.114 15:41:38 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:41.114 15:41:38 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:41.114 15:41:38 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:41.114 15:41:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:41.114 15:41:38 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:41.114 15:41:38 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:82:00.0 00:04:41.114 15:41:38 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:04:41.114 15:41:38 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:41.114 15:41:38 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:41.114 
15:41:38 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:41.114 No valid GPT data, bailing 00:04:41.114 15:41:38 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:41.373 15:41:38 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:41.373 15:41:38 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:41.373 15:41:38 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:41.373 15:41:38 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:41.373 15:41:38 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:41.373 15:41:38 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:41.373 15:41:38 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:41.373 15:41:38 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:41.373 15:41:38 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0 00:04:41.373 15:41:38 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:41.373 15:41:38 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:41.373 15:41:38 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:41.373 15:41:38 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.373 15:41:38 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.373 15:41:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:41.373 ************************************ 00:04:41.373 START TEST nvme_mount 00:04:41.373 ************************************ 00:04:41.373 15:41:38 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:41.373 15:41:38 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:41.373 15:41:38 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:41.373 15:41:38 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.373 15:41:38 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:41.373 15:41:38 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:41.373 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:41.373 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:41.373 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:41.373 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:41.373 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:41.373 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:41.374 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:41.374 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.374 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:41.374 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:41.374 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:41.374 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:41.374 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:41.374 15:41:38 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:42.314 Creating new GPT entries in memory. 00:04:42.314 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:42.314 other utilities. 00:04:42.314 15:41:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:42.314 15:41:39 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.314 15:41:39 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:42.314 15:41:39 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:42.314 15:41:39 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:43.252 Creating new GPT entries in memory. 00:04:43.252 The operation has completed successfully. 00:04:43.252 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:43.252 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.252 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 623558 00:04:43.252 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.252 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:43.252 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.252 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:43.252 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:43.252 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.511 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.511 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:43.511 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:43.511 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.511 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.511 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.511 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.511 15:41:40 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:43.511 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.511 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.511 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:43.511 15:41:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.511 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.511 15:41:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:44.448 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.706 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.706 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:44.706 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.706 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:44.706 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.706 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:44.706 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.706 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.706 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:44.706 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:44.706 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:44.707 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:44.707 15:41:41 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:44.967 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:44.967 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:44.967 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:44.967 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.967 15:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.968 15:41:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.346 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.346 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' '' 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.347 15:41:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 
00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:47.724 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:47.724 00:04:47.724 real 0m6.416s 00:04:47.724 user 0m1.466s 00:04:47.724 sys 0m2.580s 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.724 15:41:44 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:47.724 ************************************ 00:04:47.724 END TEST nvme_mount 00:04:47.724 ************************************ 00:04:47.724 15:41:44 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:47.724 15:41:44 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:47.724 15:41:44 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.724 15:41:44 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.724 15:41:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.724 ************************************ 00:04:47.724 START TEST dm_mount 00:04:47.724 ************************************ 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:47.724 15:41:44 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:48.664 Creating new GPT entries in memory. 00:04:48.664 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:48.664 other utilities. 00:04:48.664 15:41:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:48.664 15:41:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.664 15:41:45 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:48.664 15:41:45 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:48.664 15:41:45 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:50.076 Creating new GPT entries in memory. 00:04:50.076 The operation has completed successfully. 00:04:50.076 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:50.076 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.076 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:50.076 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:50.076 15:41:46 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:51.016 The operation has completed successfully. 00:04:51.016 15:41:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:51.016 15:41:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.016 15:41:47 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 625963 00:04:51.016 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:51.016 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:51.016 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:51.016 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:51.016 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:51.016 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:51.016 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:51.016 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:51.016 15:41:47 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:51.016 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:51.017 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:51.017 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:51.017 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.017 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:51.017 15:41:48 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:51.017 15:41:48 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.017 15:41:48 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:51.954 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:52.213 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:52.213 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:52.213 15:41:49 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:52.213 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:52.213 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:52.213 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:52.213 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:52.213 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:52.213 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:52.213 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.213 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:52.213 15:41:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:52.213 15:41:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.213 15:41:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:53.148 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.407 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.407 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:53.407 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:53.407 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:53.407 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.407 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:53.407 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:53.407 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.407 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:53.407 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:53.407 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:53.407 15:41:50 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:53.407 00:04:53.407 real 0m5.715s 00:04:53.407 user 0m0.944s 00:04:53.407 sys 0m1.658s 00:04:53.407 15:41:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.407 15:41:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:53.407 ************************************ 00:04:53.407 END TEST dm_mount 00:04:53.407 ************************************ 00:04:53.407 15:41:50 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:53.407 15:41:50 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:53.407 15:41:50 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:53.407 15:41:50 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.407 15:41:50 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.407 15:41:50 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:53.407 15:41:50 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.407 15:41:50 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:53.665 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:53.666 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:53.666 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.666 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:53.666 15:41:50 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:53.666 15:41:50 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.666 15:41:50 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:53.666 15:41:50 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.666 15:41:50 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:53.666 15:41:50 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.666 15:41:50 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:53.666 00:04:53.666 real 0m14.160s 00:04:53.666 user 0m3.105s 00:04:53.666 sys 0m5.359s 00:04:53.666 15:41:50 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.666 15:41:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:53.666 ************************************ 00:04:53.666 END TEST devices 00:04:53.666 ************************************ 00:04:53.666 15:41:50 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:53.666 00:04:53.666 real 0m44.720s 00:04:53.666 user 0m12.833s 00:04:53.666 sys 0m20.224s 00:04:53.666 15:41:50 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.666 15:41:50 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:53.666 ************************************ 00:04:53.666 END TEST setup.sh 00:04:53.666 ************************************ 00:04:53.923 15:41:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.923 15:41:50 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:55.298 Hugepages 00:04:55.298 node hugesize free / total 00:04:55.298 node0 1048576kB 0 / 0 00:04:55.298 node0 2048kB 2048 / 2048 00:04:55.298 node1 1048576kB 0 / 0 00:04:55.298 node1 2048kB 0 / 0 00:04:55.298 00:04:55.298 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:55.298 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:55.298 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:55.298 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:55.298 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:55.298 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:55.298 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:55.298 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:55.298 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:55.298 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:55.298 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:55.298 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:55.298 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:55.298 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:55.298 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:55.298 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:55.298 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:55.298 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:55.298 15:41:52 -- spdk/autotest.sh@130 -- # uname -s 00:04:55.298 15:41:52 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:55.298 15:41:52 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:55.298 15:41:52 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.673 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:56.673 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:56.673 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:56.673 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:56.673 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:56.673 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:56.673 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:56.673 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:56.673 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:56.673 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:56.673 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:56.673 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:56.673 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:56.673 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:56.673 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:56.673 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:57.611 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.611 15:41:54 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:58.593 15:41:55 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:58.593 15:41:55 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:58.593 15:41:55 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:58.593 15:41:55 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:58.593 15:41:55 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:58.593 15:41:55 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:58.593 15:41:55 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.593 15:41:55 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.593 15:41:55 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:58.593 15:41:55 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:58.593 15:41:55 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:04:58.593 15:41:55 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.972 Waiting for block devices as requested 00:04:59.972 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:04:59.972 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:59.972 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:59.972 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:00.232 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:00.232 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:00.232 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:00.490 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:00.490 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:00.490 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:00.490 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:00.747 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:00.747 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:00.747 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:00.747 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:01.005 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:01.005 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:01.005 15:41:58 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:01.005 15:41:58 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:05:01.005 15:41:58 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:01.006 15:41:58 -- common/autotest_common.sh@1502 -- # grep 0000:82:00.0/nvme/nvme 00:05:01.006 15:41:58 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:01.006 15:41:58 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:05:01.006 15:41:58 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:01.006 15:41:58 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:01.006 15:41:58 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:01.006 15:41:58 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:01.006 15:41:58 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:01.006 15:41:58 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:01.006 15:41:58 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:01.263 15:41:58 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:01.263 15:41:58 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:01.263 15:41:58 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:01.263 15:41:58 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:01.263 15:41:58 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:01.263 15:41:58 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:01.263 15:41:58 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:01.263 15:41:58 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:01.263 15:41:58 -- common/autotest_common.sh@1557 -- # continue 00:05:01.263 15:41:58 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:01.263 15:41:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:01.263 15:41:58 -- common/autotest_common.sh@10 -- # set +x 00:05:01.263 15:41:58 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:01.263 15:41:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.263 15:41:58 -- common/autotest_common.sh@10 -- # set +x 00:05:01.263 15:41:58 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:02.640 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:02.640 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:02.640 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:02.640 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:02.640 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:02.640 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:02.640 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:02.640 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:02.640 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:02.640 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
00:05:02.640 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:02.640 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:02.640 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:02.640 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:02.640 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:02.640 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:03.577 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:03.577 15:42:00 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:03.577 15:42:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.577 15:42:00 -- common/autotest_common.sh@10 -- # set +x 00:05:03.577 15:42:00 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:03.577 15:42:00 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:03.577 15:42:00 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:03.577 15:42:00 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:03.577 15:42:00 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:03.577 15:42:00 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:03.577 15:42:00 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:03.577 15:42:00 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:03.577 15:42:00 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.577 15:42:00 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:03.577 15:42:00 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:03.837 15:42:00 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:03.837 15:42:00 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:05:03.837 15:42:00 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:03.837 15:42:00 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:05:03.837 15:42:00 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:03.837 15:42:00 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:03.837 15:42:00 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:03.837 15:42:00 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:82:00.0 00:05:03.837 15:42:00 -- common/autotest_common.sh@1592 -- # [[ -z 0000:82:00.0 ]] 00:05:03.837 15:42:00 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=631356 00:05:03.837 15:42:00 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.837 15:42:00 -- common/autotest_common.sh@1598 -- # waitforlisten 631356 00:05:03.837 15:42:00 -- common/autotest_common.sh@829 -- # '[' -z 631356 ']' 00:05:03.837 15:42:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.837 15:42:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.837 15:42:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.837 15:42:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.837 15:42:00 -- common/autotest_common.sh@10 -- # set +x 00:05:03.837 [2024-07-12 15:42:00.949242] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
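The opal_revert_cleanup step traced above narrows the NVMe bdfs down to controllers whose PCI device id is 0x0a54 by reading sysfs directly. A minimal standalone sketch of that filtering logic, assuming the repo layout used in this job (the gen_nvme.sh path and the 0x0a54 id are taken from the trace; nothing else is implied about the helper names in autotest_common.sh):

```bash
#!/usr/bin/env bash
# Sketch: list NVMe bdfs known to SPDK, keep only those with PCI device id 0x0a54
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path assumed from the trace

# gen_nvme.sh emits a bdev_nvme_attach_controller config; traddr holds the PCI address
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')

matches=()
for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54 for this node's P4510-class drive
    [[ $device == 0x0a54 ]] && matches+=("$bdf")
done

printf '%s\n' "${matches[@]}"                          # trace above prints 0000:82:00.0 here
```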
00:05:03.837 [2024-07-12 15:42:00.949324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid631356 ] 00:05:03.837 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.837 [2024-07-12 15:42:01.006141] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.837 [2024-07-12 15:42:01.114553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.768 15:42:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.768 15:42:01 -- common/autotest_common.sh@862 -- # return 0 00:05:04.768 15:42:01 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:04.768 15:42:01 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:04.768 15:42:01 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:05:08.047 nvme0n1 00:05:08.047 15:42:04 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:08.047 [2024-07-12 15:42:05.179750] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:08.047 [2024-07-12 15:42:05.179818] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:08.047 request: 00:05:08.047 { 00:05:08.047 "nvme_ctrlr_name": "nvme0", 00:05:08.047 "password": "test", 00:05:08.047 "method": "bdev_nvme_opal_revert", 00:05:08.047 "req_id": 1 00:05:08.047 } 00:05:08.047 Got JSON-RPC error response 00:05:08.047 response: 00:05:08.047 { 00:05:08.047 "code": -32603, 00:05:08.047 "message": "Internal error" 00:05:08.047 } 00:05:08.047 15:42:05 -- common/autotest_common.sh@1604 -- # true 00:05:08.047 15:42:05 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:08.047 15:42:05 -- common/autotest_common.sh@1608 -- # killprocess 631356 00:05:08.047 15:42:05 -- common/autotest_common.sh@948 -- # '[' -z 631356 ']' 00:05:08.047 15:42:05 -- common/autotest_common.sh@952 -- # kill -0 631356 00:05:08.047 15:42:05 -- common/autotest_common.sh@953 -- # uname 00:05:08.047 15:42:05 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:08.047 15:42:05 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 631356 00:05:08.047 15:42:05 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:08.047 15:42:05 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:08.047 15:42:05 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 631356' 00:05:08.047 killing process with pid 631356 00:05:08.047 15:42:05 -- common/autotest_common.sh@967 -- # kill 631356 00:05:08.047 15:42:05 -- common/autotest_common.sh@972 -- # wait 631356 00:05:09.946 15:42:07 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:09.946 15:42:07 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:09.946 15:42:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:09.946 15:42:07 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:09.946 15:42:07 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:09.946 15:42:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.946 15:42:07 -- common/autotest_common.sh@10 -- # set +x 00:05:09.946 15:42:07 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:09.946 15:42:07 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:09.946 15:42:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.946 15:42:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.946 15:42:07 -- common/autotest_common.sh@10 -- # set +x 00:05:09.946 ************************************ 00:05:09.946 START TEST env 00:05:09.946 ************************************ 00:05:09.946 15:42:07 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:09.946 * Looking for test storage... 00:05:09.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:09.946 15:42:07 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:09.946 15:42:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.946 15:42:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.946 15:42:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.946 ************************************ 00:05:09.946 START TEST env_memory 00:05:09.946 ************************************ 00:05:09.946 15:42:07 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:09.946 00:05:09.946 00:05:09.946 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.946 http://cunit.sourceforge.net/ 00:05:09.946 00:05:09.946 00:05:09.946 Suite: memory 00:05:09.946 Test: alloc and free memory map ...[2024-07-12 15:42:07.161223] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:09.946 passed 00:05:09.946 Test: mem map translation ...[2024-07-12 15:42:07.182047] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:09.946 [2024-07-12 15:42:07.182070] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:09.946 [2024-07-12 15:42:07.182112] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:09.946 [2024-07-12 15:42:07.182124] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:09.946 passed 00:05:09.946 Test: mem map registration ...[2024-07-12 15:42:07.225590] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:09.946 [2024-07-12 15:42:07.225612] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:09.946 passed 00:05:10.203 Test: mem map adjacent registrations ...passed 00:05:10.203 00:05:10.203 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.203 suites 1 1 n/a 0 0 00:05:10.203 tests 4 4 4 0 0 00:05:10.203 asserts 152 152 152 0 n/a 00:05:10.203 00:05:10.203 Elapsed time = 0.143 seconds 00:05:10.203 00:05:10.203 real 0m0.150s 00:05:10.203 user 0m0.139s 00:05:10.203 sys 0m0.010s 00:05:10.203 15:42:07 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.203 15:42:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:10.203 ************************************ 00:05:10.203 END TEST env_memory 00:05:10.203 ************************************ 00:05:10.203 15:42:07 env -- common/autotest_common.sh@1142 -- # return 0 00:05:10.203 15:42:07 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:10.203 15:42:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.203 15:42:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.203 15:42:07 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.203 ************************************ 00:05:10.203 START TEST env_vtophys 00:05:10.203 ************************************ 00:05:10.203 15:42:07 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:10.203 EAL: lib.eal log level changed from notice to debug 00:05:10.203 EAL: Detected lcore 0 as core 0 on socket 0 00:05:10.203 EAL: Detected lcore 1 as core 1 on socket 0 00:05:10.203 EAL: Detected lcore 2 as core 2 on socket 0 00:05:10.203 EAL: Detected lcore 3 as core 3 on socket 0 00:05:10.203 EAL: Detected lcore 4 as core 4 on socket 0 00:05:10.203 EAL: Detected lcore 5 as core 5 on socket 0 00:05:10.203 EAL: Detected lcore 6 as core 8 on socket 0 00:05:10.204 EAL: Detected lcore 7 as core 9 on socket 0 00:05:10.204 EAL: Detected lcore 8 as core 10 on socket 0 00:05:10.204 EAL: Detected lcore 9 as core 11 on socket 0 00:05:10.204 EAL: Detected lcore 10 as core 12 on socket 0 00:05:10.204 EAL: Detected lcore 11 as core 13 on socket 0 00:05:10.204 EAL: Detected lcore 12 as core 0 on socket 1 00:05:10.204 EAL: Detected lcore 13 as core 1 on socket 1 00:05:10.204 EAL: Detected lcore 14 as core 2 on socket 1 00:05:10.204 EAL: Detected lcore 15 as core 3 on socket 1 00:05:10.204 EAL: Detected lcore 16 as core 4 on socket 1 00:05:10.204 EAL: Detected lcore 17 as core 5 on socket 1 00:05:10.204 EAL: Detected lcore 18 as core 8 on socket 1 00:05:10.204 EAL: Detected lcore 19 as core 9 on socket 1 00:05:10.204 EAL: Detected lcore 20 as core 10 on socket 1 00:05:10.204 EAL: Detected lcore 21 as core 11 on socket 1 00:05:10.204 EAL: Detected lcore 22 as core 12 on socket 1 00:05:10.204 EAL: Detected lcore 23 as core 13 on socket 1 00:05:10.204 EAL: Detected lcore 24 as core 0 on socket 0 00:05:10.204 EAL: Detected lcore 25 as core 1 on socket 0 00:05:10.204 EAL: Detected lcore 26 as core 2 on socket 0 00:05:10.204 EAL: Detected lcore 27 as core 3 on socket 0 00:05:10.204 EAL: Detected lcore 28 as core 4 on socket 0 00:05:10.204 EAL: Detected lcore 29 as core 5 on socket 0 00:05:10.204 EAL: Detected lcore 30 as core 8 on socket 0 00:05:10.204 EAL: Detected lcore 31 as core 9 on socket 0 00:05:10.204 EAL: Detected lcore 32 as core 10 on socket 0 00:05:10.204 EAL: Detected lcore 33 as core 11 on socket 0 00:05:10.204 EAL: Detected lcore 34 as core 12 on socket 0 00:05:10.204 EAL: Detected lcore 35 as core 13 on socket 0 00:05:10.204 EAL: Detected lcore 36 as core 0 on socket 1 00:05:10.204 EAL: Detected lcore 37 as core 1 on socket 1 00:05:10.204 EAL: Detected lcore 38 as core 2 on socket 1 00:05:10.204 EAL: Detected lcore 39 as core 3 on socket 1 00:05:10.204 EAL: Detected lcore 40 as core 4 on socket 1 00:05:10.204 EAL: Detected lcore 41 as core 5 on socket 1 00:05:10.204 EAL: Detected 
lcore 42 as core 8 on socket 1 00:05:10.204 EAL: Detected lcore 43 as core 9 on socket 1 00:05:10.204 EAL: Detected lcore 44 as core 10 on socket 1 00:05:10.204 EAL: Detected lcore 45 as core 11 on socket 1 00:05:10.204 EAL: Detected lcore 46 as core 12 on socket 1 00:05:10.204 EAL: Detected lcore 47 as core 13 on socket 1 00:05:10.204 EAL: Maximum logical cores by configuration: 128 00:05:10.204 EAL: Detected CPU lcores: 48 00:05:10.204 EAL: Detected NUMA nodes: 2 00:05:10.204 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:10.204 EAL: Detected shared linkage of DPDK 00:05:10.204 EAL: No shared files mode enabled, IPC will be disabled 00:05:10.204 EAL: Bus pci wants IOVA as 'DC' 00:05:10.204 EAL: Buses did not request a specific IOVA mode. 00:05:10.204 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:10.204 EAL: Selected IOVA mode 'VA' 00:05:10.204 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.204 EAL: Probing VFIO support... 00:05:10.204 EAL: IOMMU type 1 (Type 1) is supported 00:05:10.204 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:10.204 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:10.204 EAL: VFIO support initialized 00:05:10.204 EAL: Ask a virtual area of 0x2e000 bytes 00:05:10.204 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:10.204 EAL: Setting up physically contiguous memory... 00:05:10.204 EAL: Setting maximum number of open files to 524288 00:05:10.204 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:10.204 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:10.204 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:10.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.204 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:10.204 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.204 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:10.204 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:10.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.204 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:10.204 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.204 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:10.204 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:10.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.204 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:10.204 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.204 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:10.204 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:10.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.204 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:10.204 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:10.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.204 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:10.204 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:10.204 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:10.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.204 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:10.204 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:05:10.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.204 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:10.204 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:10.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.204 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:10.204 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.204 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:10.204 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:10.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.204 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:10.204 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.204 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:10.204 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:10.204 EAL: Ask a virtual area of 0x61000 bytes 00:05:10.204 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:10.204 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:10.204 EAL: Ask a virtual area of 0x400000000 bytes 00:05:10.204 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:10.204 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:10.204 EAL: Hugepages will be freed exactly as allocated. 00:05:10.204 EAL: No shared files mode enabled, IPC is disabled 00:05:10.204 EAL: No shared files mode enabled, IPC is disabled 00:05:10.204 EAL: TSC frequency is ~2700000 KHz 00:05:10.204 EAL: Main lcore 0 is ready (tid=7f3377f21a00;cpuset=[0]) 00:05:10.204 EAL: Trying to obtain current memory policy. 00:05:10.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.204 EAL: Restoring previous memory policy: 0 00:05:10.204 EAL: request: mp_malloc_sync 00:05:10.204 EAL: No shared files mode enabled, IPC is disabled 00:05:10.204 EAL: Heap on socket 0 was expanded by 2MB 00:05:10.204 EAL: No shared files mode enabled, IPC is disabled 00:05:10.204 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:10.204 EAL: Mem event callback 'spdk:(nil)' registered 00:05:10.204 00:05:10.204 00:05:10.204 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.204 http://cunit.sourceforge.net/ 00:05:10.204 00:05:10.204 00:05:10.204 Suite: components_suite 00:05:10.204 Test: vtophys_malloc_test ...passed 00:05:10.204 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:10.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.204 EAL: Restoring previous memory policy: 4 00:05:10.204 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.204 EAL: request: mp_malloc_sync 00:05:10.204 EAL: No shared files mode enabled, IPC is disabled 00:05:10.204 EAL: Heap on socket 0 was expanded by 4MB 00:05:10.204 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.204 EAL: request: mp_malloc_sync 00:05:10.204 EAL: No shared files mode enabled, IPC is disabled 00:05:10.204 EAL: Heap on socket 0 was shrunk by 4MB 00:05:10.204 EAL: Trying to obtain current memory policy. 
00:05:10.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.204 EAL: Restoring previous memory policy: 4 00:05:10.204 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.204 EAL: request: mp_malloc_sync 00:05:10.204 EAL: No shared files mode enabled, IPC is disabled 00:05:10.204 EAL: Heap on socket 0 was expanded by 6MB 00:05:10.204 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.204 EAL: request: mp_malloc_sync 00:05:10.204 EAL: No shared files mode enabled, IPC is disabled 00:05:10.204 EAL: Heap on socket 0 was shrunk by 6MB 00:05:10.204 EAL: Trying to obtain current memory policy. 00:05:10.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.204 EAL: Restoring previous memory policy: 4 00:05:10.204 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.204 EAL: request: mp_malloc_sync 00:05:10.204 EAL: No shared files mode enabled, IPC is disabled 00:05:10.204 EAL: Heap on socket 0 was expanded by 10MB 00:05:10.204 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.204 EAL: request: mp_malloc_sync 00:05:10.204 EAL: No shared files mode enabled, IPC is disabled 00:05:10.204 EAL: Heap on socket 0 was shrunk by 10MB 00:05:10.204 EAL: Trying to obtain current memory policy. 00:05:10.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.204 EAL: Restoring previous memory policy: 4 00:05:10.204 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.204 EAL: request: mp_malloc_sync 00:05:10.204 EAL: No shared files mode enabled, IPC is disabled 00:05:10.204 EAL: Heap on socket 0 was expanded by 18MB 00:05:10.204 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.204 EAL: request: mp_malloc_sync 00:05:10.204 EAL: No shared files mode enabled, IPC is disabled 00:05:10.204 EAL: Heap on socket 0 was shrunk by 18MB 00:05:10.204 EAL: Trying to obtain current memory policy. 00:05:10.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.204 EAL: Restoring previous memory policy: 4 00:05:10.204 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.204 EAL: request: mp_malloc_sync 00:05:10.204 EAL: No shared files mode enabled, IPC is disabled 00:05:10.204 EAL: Heap on socket 0 was expanded by 34MB 00:05:10.204 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.204 EAL: request: mp_malloc_sync 00:05:10.204 EAL: No shared files mode enabled, IPC is disabled 00:05:10.204 EAL: Heap on socket 0 was shrunk by 34MB 00:05:10.204 EAL: Trying to obtain current memory policy. 00:05:10.204 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.204 EAL: Restoring previous memory policy: 4 00:05:10.205 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.205 EAL: request: mp_malloc_sync 00:05:10.205 EAL: No shared files mode enabled, IPC is disabled 00:05:10.205 EAL: Heap on socket 0 was expanded by 66MB 00:05:10.205 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.205 EAL: request: mp_malloc_sync 00:05:10.205 EAL: No shared files mode enabled, IPC is disabled 00:05:10.205 EAL: Heap on socket 0 was shrunk by 66MB 00:05:10.205 EAL: Trying to obtain current memory policy. 
00:05:10.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.205 EAL: Restoring previous memory policy: 4 00:05:10.205 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.205 EAL: request: mp_malloc_sync 00:05:10.205 EAL: No shared files mode enabled, IPC is disabled 00:05:10.205 EAL: Heap on socket 0 was expanded by 130MB 00:05:10.461 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.461 EAL: request: mp_malloc_sync 00:05:10.461 EAL: No shared files mode enabled, IPC is disabled 00:05:10.461 EAL: Heap on socket 0 was shrunk by 130MB 00:05:10.461 EAL: Trying to obtain current memory policy. 00:05:10.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.461 EAL: Restoring previous memory policy: 4 00:05:10.461 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.461 EAL: request: mp_malloc_sync 00:05:10.461 EAL: No shared files mode enabled, IPC is disabled 00:05:10.461 EAL: Heap on socket 0 was expanded by 258MB 00:05:10.461 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.461 EAL: request: mp_malloc_sync 00:05:10.461 EAL: No shared files mode enabled, IPC is disabled 00:05:10.461 EAL: Heap on socket 0 was shrunk by 258MB 00:05:10.461 EAL: Trying to obtain current memory policy. 00:05:10.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.718 EAL: Restoring previous memory policy: 4 00:05:10.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.718 EAL: request: mp_malloc_sync 00:05:10.718 EAL: No shared files mode enabled, IPC is disabled 00:05:10.718 EAL: Heap on socket 0 was expanded by 514MB 00:05:10.718 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.976 EAL: request: mp_malloc_sync 00:05:10.976 EAL: No shared files mode enabled, IPC is disabled 00:05:10.976 EAL: Heap on socket 0 was shrunk by 514MB 00:05:10.976 EAL: Trying to obtain current memory policy. 
00:05:10.976 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:11.233 EAL: Restoring previous memory policy: 4 00:05:11.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.233 EAL: request: mp_malloc_sync 00:05:11.233 EAL: No shared files mode enabled, IPC is disabled 00:05:11.233 EAL: Heap on socket 0 was expanded by 1026MB 00:05:11.233 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.491 EAL: request: mp_malloc_sync 00:05:11.491 EAL: No shared files mode enabled, IPC is disabled 00:05:11.491 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:11.491 passed 00:05:11.491 00:05:11.491 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.491 suites 1 1 n/a 0 0 00:05:11.491 tests 2 2 2 0 0 00:05:11.491 asserts 497 497 497 0 n/a 00:05:11.491 00:05:11.491 Elapsed time = 1.272 seconds 00:05:11.491 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.491 EAL: request: mp_malloc_sync 00:05:11.491 EAL: No shared files mode enabled, IPC is disabled 00:05:11.491 EAL: Heap on socket 0 was shrunk by 2MB 00:05:11.491 EAL: No shared files mode enabled, IPC is disabled 00:05:11.491 EAL: No shared files mode enabled, IPC is disabled 00:05:11.491 EAL: No shared files mode enabled, IPC is disabled 00:05:11.491 00:05:11.491 real 0m1.389s 00:05:11.491 user 0m0.806s 00:05:11.491 sys 0m0.545s 00:05:11.491 15:42:08 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.491 15:42:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:11.491 ************************************ 00:05:11.491 END TEST env_vtophys 00:05:11.491 ************************************ 00:05:11.491 15:42:08 env -- common/autotest_common.sh@1142 -- # return 0 00:05:11.491 15:42:08 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.491 15:42:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:11.491 15:42:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.491 15:42:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.491 ************************************ 00:05:11.491 START TEST env_pci 00:05:11.491 ************************************ 00:05:11.491 15:42:08 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:11.491 00:05:11.491 00:05:11.491 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.491 http://cunit.sourceforge.net/ 00:05:11.491 00:05:11.491 00:05:11.491 Suite: pci 00:05:11.491 Test: pci_hook ...[2024-07-12 15:42:08.769253] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 632847 has claimed it 00:05:11.748 EAL: Cannot find device (10000:00:01.0) 00:05:11.748 EAL: Failed to attach device on primary process 00:05:11.748 passed 00:05:11.748 00:05:11.748 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.748 suites 1 1 n/a 0 0 00:05:11.748 tests 1 1 1 0 0 00:05:11.748 asserts 25 25 25 0 n/a 00:05:11.748 00:05:11.748 Elapsed time = 0.023 seconds 00:05:11.748 00:05:11.748 real 0m0.036s 00:05:11.748 user 0m0.011s 00:05:11.748 sys 0m0.025s 00:05:11.748 15:42:08 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.748 15:42:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:11.748 ************************************ 00:05:11.748 END TEST env_pci 00:05:11.748 ************************************ 
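The env suite running here is driven by test/env/env.sh, which wraps each unit-test binary in run_test so the START/END banners and timing lines in this log come out uniformly. A simplified sketch of that driver, reconstructed from the env/env.sh trace markers in this section; the run_test helper itself lives in test/common/autotest_common.sh and is assumed, not reimplemented:

```bash
#!/usr/bin/env bash
# Simplified sketch of the test/env/env.sh flow seen in this trace.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from the trace
testdir=$rootdir/test/env
source "$rootdir/test/common/autotest_common.sh"            # provides run_test (banner + timing wrapper)

run_test env_memory "$testdir/memory/memory_ut"
run_test env_vtophys "$testdir/vtophys/vtophys"
run_test env_pci "$testdir/pci/pci_ut"

# The DPDK post-init test is pinned to core 0 with a fixed base virtual address,
# matching the '-c 0x1 --base-virtaddr=0x200000000000' arguments in the log.
run_test env_dpdk_post_init "$testdir/env_dpdk_post_init/env_dpdk_post_init" \
    -c 0x1 --base-virtaddr=0x200000000000
run_test env_mem_callbacks "$testdir/mem_callbacks/mem_callbacks"
```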
00:05:11.748 15:42:08 env -- common/autotest_common.sh@1142 -- # return 0 00:05:11.748 15:42:08 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:11.748 15:42:08 env -- env/env.sh@15 -- # uname 00:05:11.748 15:42:08 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:11.748 15:42:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:11.748 15:42:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.748 15:42:08 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:11.748 15:42:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.748 15:42:08 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.748 ************************************ 00:05:11.748 START TEST env_dpdk_post_init 00:05:11.748 ************************************ 00:05:11.748 15:42:08 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:11.748 EAL: Detected CPU lcores: 48 00:05:11.748 EAL: Detected NUMA nodes: 2 00:05:11.748 EAL: Detected shared linkage of DPDK 00:05:11.748 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.748 EAL: Selected IOVA mode 'VA' 00:05:11.748 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.748 EAL: VFIO support initialized 00:05:11.748 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.748 EAL: Using IOMMU type 1 (Type 1) 00:05:11.748 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:11.748 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:11.748 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:11.748 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:11.748 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:11.748 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:11.748 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:11.748 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:12.007 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:12.007 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:12.007 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:12.007 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:12.007 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:12.007 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:12.007 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:12.007 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:12.943 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:05:16.222 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:05:16.222 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:05:16.222 Starting DPDK initialization... 00:05:16.222 Starting SPDK post initialization... 00:05:16.222 SPDK NVMe probe 00:05:16.222 Attaching to 0000:82:00.0 00:05:16.222 Attached to 0000:82:00.0 00:05:16.222 Cleaning up... 
00:05:16.222 00:05:16.222 real 0m4.429s 00:05:16.222 user 0m3.325s 00:05:16.222 sys 0m0.172s 00:05:16.222 15:42:13 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.222 15:42:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:16.222 ************************************ 00:05:16.222 END TEST env_dpdk_post_init 00:05:16.222 ************************************ 00:05:16.222 15:42:13 env -- common/autotest_common.sh@1142 -- # return 0 00:05:16.222 15:42:13 env -- env/env.sh@26 -- # uname 00:05:16.222 15:42:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:16.222 15:42:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:16.222 15:42:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.222 15:42:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.222 15:42:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.222 ************************************ 00:05:16.222 START TEST env_mem_callbacks 00:05:16.222 ************************************ 00:05:16.222 15:42:13 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:16.222 EAL: Detected CPU lcores: 48 00:05:16.222 EAL: Detected NUMA nodes: 2 00:05:16.222 EAL: Detected shared linkage of DPDK 00:05:16.222 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:16.222 EAL: Selected IOVA mode 'VA' 00:05:16.222 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.222 EAL: VFIO support initialized 00:05:16.222 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:16.222 00:05:16.222 00:05:16.222 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.222 http://cunit.sourceforge.net/ 00:05:16.222 00:05:16.222 00:05:16.222 Suite: memory 00:05:16.222 Test: test ... 
00:05:16.222 register 0x200000200000 2097152 00:05:16.222 malloc 3145728 00:05:16.222 register 0x200000400000 4194304 00:05:16.222 buf 0x200000500000 len 3145728 PASSED 00:05:16.222 malloc 64 00:05:16.222 buf 0x2000004fff40 len 64 PASSED 00:05:16.222 malloc 4194304 00:05:16.222 register 0x200000800000 6291456 00:05:16.222 buf 0x200000a00000 len 4194304 PASSED 00:05:16.222 free 0x200000500000 3145728 00:05:16.222 free 0x2000004fff40 64 00:05:16.222 unregister 0x200000400000 4194304 PASSED 00:05:16.222 free 0x200000a00000 4194304 00:05:16.222 unregister 0x200000800000 6291456 PASSED 00:05:16.222 malloc 8388608 00:05:16.222 register 0x200000400000 10485760 00:05:16.222 buf 0x200000600000 len 8388608 PASSED 00:05:16.222 free 0x200000600000 8388608 00:05:16.222 unregister 0x200000400000 10485760 PASSED 00:05:16.222 passed 00:05:16.222 00:05:16.222 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.222 suites 1 1 n/a 0 0 00:05:16.222 tests 1 1 1 0 0 00:05:16.222 asserts 15 15 15 0 n/a 00:05:16.222 00:05:16.222 Elapsed time = 0.005 seconds 00:05:16.222 00:05:16.222 real 0m0.047s 00:05:16.222 user 0m0.016s 00:05:16.222 sys 0m0.031s 00:05:16.222 15:42:13 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.222 15:42:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:16.222 ************************************ 00:05:16.222 END TEST env_mem_callbacks 00:05:16.222 ************************************ 00:05:16.222 15:42:13 env -- common/autotest_common.sh@1142 -- # return 0 00:05:16.222 00:05:16.222 real 0m6.340s 00:05:16.222 user 0m4.408s 00:05:16.222 sys 0m0.979s 00:05:16.222 15:42:13 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.222 15:42:13 env -- common/autotest_common.sh@10 -- # set +x 00:05:16.222 ************************************ 00:05:16.222 END TEST env 00:05:16.222 ************************************ 00:05:16.222 15:42:13 -- common/autotest_common.sh@1142 -- # return 0 00:05:16.222 15:42:13 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:16.222 15:42:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.222 15:42:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.222 15:42:13 -- common/autotest_common.sh@10 -- # set +x 00:05:16.222 ************************************ 00:05:16.222 START TEST rpc 00:05:16.222 ************************************ 00:05:16.222 15:42:13 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:16.222 * Looking for test storage... 00:05:16.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.222 15:42:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=633593 00:05:16.222 15:42:13 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:16.222 15:42:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.222 15:42:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 633593 00:05:16.222 15:42:13 rpc -- common/autotest_common.sh@829 -- # '[' -z 633593 ']' 00:05:16.222 15:42:13 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.222 15:42:13 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.222 15:42:13 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
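The rpc suite starting here follows the usual SPDK test pattern visible in the rpc.sh trace: launch spdk_tgt in the background, install a trap so it is killed on exit, and block until the JSON-RPC socket answers. A minimal sketch of that pattern, with a simple poll standing in for the real waitforlisten helper from autotest_common.sh (the poll loop and its timeout are assumptions, not taken from the log):

```bash
#!/usr/bin/env bash
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from the trace
rpc_sock=/var/tmp/spdk.sock

"$rootdir/build/bin/spdk_tgt" -e bdev &                     # '-e bdev' enables the bdev trace group, as in rpc.sh
spdk_pid=$!
trap 'kill -9 $spdk_pid; exit 1' SIGINT SIGTERM EXIT

# crude stand-in for waitforlisten: poll until the UNIX-domain RPC socket responds
for _ in $(seq 1 100); do
    if "$rootdir/scripts/rpc.py" -s "$rpc_sock" -t 1 rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done
```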
00:05:16.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.222 15:42:13 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.222 15:42:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.480 [2024-07-12 15:42:13.543767] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:05:16.480 [2024-07-12 15:42:13.543871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid633593 ] 00:05:16.480 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.480 [2024-07-12 15:42:13.601298] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.480 [2024-07-12 15:42:13.706830] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:16.480 [2024-07-12 15:42:13.706886] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 633593' to capture a snapshot of events at runtime. 00:05:16.481 [2024-07-12 15:42:13.706910] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:16.481 [2024-07-12 15:42:13.706921] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:16.481 [2024-07-12 15:42:13.706931] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid633593 for offline analysis/debug. 00:05:16.481 [2024-07-12 15:42:13.706958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.739 15:42:13 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.739 15:42:13 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:16.739 15:42:13 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.739 15:42:13 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:16.739 15:42:13 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:16.739 15:42:13 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:16.739 15:42:13 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.739 15:42:13 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.739 15:42:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.739 ************************************ 00:05:16.739 START TEST rpc_integrity 00:05:16.739 ************************************ 00:05:16.739 15:42:13 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:16.739 15:42:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:16.739 15:42:13 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.739 15:42:13 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.739 15:42:13 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.739 15:42:13 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:16.739 15:42:13 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:16.739 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:16.739 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:16.739 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.739 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.997 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.997 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:16.997 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:16.997 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.997 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.997 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.997 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:16.997 { 00:05:16.997 "name": "Malloc0", 00:05:16.997 "aliases": [ 00:05:16.997 "a061e7fd-70d5-4fd1-94b1-292c9bedb36b" 00:05:16.997 ], 00:05:16.997 "product_name": "Malloc disk", 00:05:16.997 "block_size": 512, 00:05:16.997 "num_blocks": 16384, 00:05:16.997 "uuid": "a061e7fd-70d5-4fd1-94b1-292c9bedb36b", 00:05:16.997 "assigned_rate_limits": { 00:05:16.997 "rw_ios_per_sec": 0, 00:05:16.997 "rw_mbytes_per_sec": 0, 00:05:16.997 "r_mbytes_per_sec": 0, 00:05:16.997 "w_mbytes_per_sec": 0 00:05:16.997 }, 00:05:16.997 "claimed": false, 00:05:16.997 "zoned": false, 00:05:16.997 "supported_io_types": { 00:05:16.997 "read": true, 00:05:16.997 "write": true, 00:05:16.997 "unmap": true, 00:05:16.997 "flush": true, 00:05:16.997 "reset": true, 00:05:16.997 "nvme_admin": false, 00:05:16.997 "nvme_io": false, 00:05:16.997 "nvme_io_md": false, 00:05:16.997 "write_zeroes": true, 00:05:16.997 "zcopy": true, 00:05:16.997 "get_zone_info": false, 00:05:16.997 "zone_management": false, 00:05:16.997 "zone_append": false, 00:05:16.997 "compare": false, 00:05:16.997 "compare_and_write": false, 00:05:16.997 "abort": true, 00:05:16.997 "seek_hole": false, 00:05:16.997 "seek_data": false, 00:05:16.997 "copy": true, 00:05:16.997 "nvme_iov_md": false 00:05:16.997 }, 00:05:16.997 "memory_domains": [ 00:05:16.997 { 00:05:16.997 "dma_device_id": "system", 00:05:16.998 "dma_device_type": 1 00:05:16.998 }, 00:05:16.998 { 00:05:16.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.998 "dma_device_type": 2 00:05:16.998 } 00:05:16.998 ], 00:05:16.998 "driver_specific": {} 00:05:16.998 } 00:05:16.998 ]' 00:05:16.998 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:16.998 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:16.998 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.998 [2024-07-12 15:42:14.085079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:16.998 [2024-07-12 15:42:14.085135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:16.998 [2024-07-12 15:42:14.085155] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc30540 00:05:16.998 [2024-07-12 15:42:14.085173] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:16.998 
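The rpc_integrity trace above amounts to a create/inspect/delete round-trip over JSON-RPC: a malloc bdev, a passthru bdev layered on top of it, and jq length checks before and after. A hand-run equivalent of the same flow, sketched with scripts/rpc.py (the rpc.sh helpers in the test wrap these calls, so this is an approximation rather than the script itself):

    rpc=./scripts/rpc.py
    $rpc bdev_get_bdevs | jq length                      # 0 on a fresh target
    $rpc bdev_malloc_create 8 512                        # 8 MiB, 512 B blocks -> 16384 blocks, prints Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0    # claim Malloc0 and expose it as Passthru0
    $rpc bdev_get_bdevs | jq length                      # expect 2: Malloc0 plus Passthru0
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    $rpc bdev_get_bdevs | jq length                      # back to 0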
[2024-07-12 15:42:14.086454] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:16.998 [2024-07-12 15:42:14.086477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:16.998 Passthru0 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.998 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.998 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:16.998 { 00:05:16.998 "name": "Malloc0", 00:05:16.998 "aliases": [ 00:05:16.998 "a061e7fd-70d5-4fd1-94b1-292c9bedb36b" 00:05:16.998 ], 00:05:16.998 "product_name": "Malloc disk", 00:05:16.998 "block_size": 512, 00:05:16.998 "num_blocks": 16384, 00:05:16.998 "uuid": "a061e7fd-70d5-4fd1-94b1-292c9bedb36b", 00:05:16.998 "assigned_rate_limits": { 00:05:16.998 "rw_ios_per_sec": 0, 00:05:16.998 "rw_mbytes_per_sec": 0, 00:05:16.998 "r_mbytes_per_sec": 0, 00:05:16.998 "w_mbytes_per_sec": 0 00:05:16.998 }, 00:05:16.998 "claimed": true, 00:05:16.998 "claim_type": "exclusive_write", 00:05:16.998 "zoned": false, 00:05:16.998 "supported_io_types": { 00:05:16.998 "read": true, 00:05:16.998 "write": true, 00:05:16.998 "unmap": true, 00:05:16.998 "flush": true, 00:05:16.998 "reset": true, 00:05:16.998 "nvme_admin": false, 00:05:16.998 "nvme_io": false, 00:05:16.998 "nvme_io_md": false, 00:05:16.998 "write_zeroes": true, 00:05:16.998 "zcopy": true, 00:05:16.998 "get_zone_info": false, 00:05:16.998 "zone_management": false, 00:05:16.998 "zone_append": false, 00:05:16.998 "compare": false, 00:05:16.998 "compare_and_write": false, 00:05:16.998 "abort": true, 00:05:16.998 "seek_hole": false, 00:05:16.998 "seek_data": false, 00:05:16.998 "copy": true, 00:05:16.998 "nvme_iov_md": false 00:05:16.998 }, 00:05:16.998 "memory_domains": [ 00:05:16.998 { 00:05:16.998 "dma_device_id": "system", 00:05:16.998 "dma_device_type": 1 00:05:16.998 }, 00:05:16.998 { 00:05:16.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.998 "dma_device_type": 2 00:05:16.998 } 00:05:16.998 ], 00:05:16.998 "driver_specific": {} 00:05:16.998 }, 00:05:16.998 { 00:05:16.998 "name": "Passthru0", 00:05:16.998 "aliases": [ 00:05:16.998 "154312b8-a8f0-5b1e-ad11-8198869a2f09" 00:05:16.998 ], 00:05:16.998 "product_name": "passthru", 00:05:16.998 "block_size": 512, 00:05:16.998 "num_blocks": 16384, 00:05:16.998 "uuid": "154312b8-a8f0-5b1e-ad11-8198869a2f09", 00:05:16.998 "assigned_rate_limits": { 00:05:16.998 "rw_ios_per_sec": 0, 00:05:16.998 "rw_mbytes_per_sec": 0, 00:05:16.998 "r_mbytes_per_sec": 0, 00:05:16.998 "w_mbytes_per_sec": 0 00:05:16.998 }, 00:05:16.998 "claimed": false, 00:05:16.998 "zoned": false, 00:05:16.998 "supported_io_types": { 00:05:16.998 "read": true, 00:05:16.998 "write": true, 00:05:16.998 "unmap": true, 00:05:16.998 "flush": true, 00:05:16.998 "reset": true, 00:05:16.998 "nvme_admin": false, 00:05:16.998 "nvme_io": false, 00:05:16.998 "nvme_io_md": false, 00:05:16.998 "write_zeroes": true, 00:05:16.998 "zcopy": true, 00:05:16.998 "get_zone_info": false, 00:05:16.998 "zone_management": false, 00:05:16.998 "zone_append": false, 00:05:16.998 "compare": false, 00:05:16.998 "compare_and_write": false, 00:05:16.998 "abort": true, 00:05:16.998 "seek_hole": false, 
00:05:16.998 "seek_data": false, 00:05:16.998 "copy": true, 00:05:16.998 "nvme_iov_md": false 00:05:16.998 }, 00:05:16.998 "memory_domains": [ 00:05:16.998 { 00:05:16.998 "dma_device_id": "system", 00:05:16.998 "dma_device_type": 1 00:05:16.998 }, 00:05:16.998 { 00:05:16.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.998 "dma_device_type": 2 00:05:16.998 } 00:05:16.998 ], 00:05:16.998 "driver_specific": { 00:05:16.998 "passthru": { 00:05:16.998 "name": "Passthru0", 00:05:16.998 "base_bdev_name": "Malloc0" 00:05:16.998 } 00:05:16.998 } 00:05:16.998 } 00:05:16.998 ]' 00:05:16.998 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:16.998 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:16.998 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.998 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.998 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.998 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:16.998 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:16.998 15:42:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:16.998 00:05:16.998 real 0m0.224s 00:05:16.998 user 0m0.148s 00:05:16.998 sys 0m0.015s 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.998 15:42:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.998 ************************************ 00:05:16.998 END TEST rpc_integrity 00:05:16.998 ************************************ 00:05:16.998 15:42:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.998 15:42:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:16.998 15:42:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.998 15:42:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.998 15:42:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.998 ************************************ 00:05:16.998 START TEST rpc_plugins 00:05:16.998 ************************************ 00:05:16.998 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:16.998 15:42:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:16.998 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.998 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.998 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.998 15:42:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:16.998 15:42:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:16.998 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.998 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.998 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.998 15:42:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:16.998 { 00:05:16.998 "name": "Malloc1", 00:05:16.998 "aliases": [ 00:05:16.998 "885c0bee-b6c5-41b6-8ed9-ad44f5199854" 00:05:16.998 ], 00:05:16.998 "product_name": "Malloc disk", 00:05:16.998 "block_size": 4096, 00:05:16.998 "num_blocks": 256, 00:05:16.998 "uuid": "885c0bee-b6c5-41b6-8ed9-ad44f5199854", 00:05:16.998 "assigned_rate_limits": { 00:05:16.998 "rw_ios_per_sec": 0, 00:05:16.998 "rw_mbytes_per_sec": 0, 00:05:16.998 "r_mbytes_per_sec": 0, 00:05:16.998 "w_mbytes_per_sec": 0 00:05:16.998 }, 00:05:16.998 "claimed": false, 00:05:16.998 "zoned": false, 00:05:16.998 "supported_io_types": { 00:05:16.998 "read": true, 00:05:16.998 "write": true, 00:05:16.998 "unmap": true, 00:05:16.998 "flush": true, 00:05:16.998 "reset": true, 00:05:16.998 "nvme_admin": false, 00:05:16.998 "nvme_io": false, 00:05:16.998 "nvme_io_md": false, 00:05:16.998 "write_zeroes": true, 00:05:16.998 "zcopy": true, 00:05:16.998 "get_zone_info": false, 00:05:16.998 "zone_management": false, 00:05:16.998 "zone_append": false, 00:05:16.998 "compare": false, 00:05:16.998 "compare_and_write": false, 00:05:16.998 "abort": true, 00:05:16.998 "seek_hole": false, 00:05:16.998 "seek_data": false, 00:05:16.998 "copy": true, 00:05:16.998 "nvme_iov_md": false 00:05:16.998 }, 00:05:16.998 "memory_domains": [ 00:05:16.998 { 00:05:16.998 "dma_device_id": "system", 00:05:16.998 "dma_device_type": 1 00:05:16.998 }, 00:05:16.998 { 00:05:16.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.998 "dma_device_type": 2 00:05:16.998 } 00:05:16.998 ], 00:05:16.998 "driver_specific": {} 00:05:16.998 } 00:05:16.998 ]' 00:05:16.998 15:42:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:17.256 15:42:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:17.256 15:42:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:17.256 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.256 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.256 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.256 15:42:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:17.256 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.256 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.256 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.256 15:42:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:17.256 15:42:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:17.256 15:42:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:17.256 00:05:17.257 real 0m0.104s 00:05:17.257 user 0m0.067s 00:05:17.257 sys 0m0.010s 00:05:17.257 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.257 15:42:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:17.257 ************************************ 00:05:17.257 END TEST rpc_plugins 00:05:17.257 ************************************ 00:05:17.257 15:42:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.257 15:42:14 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:17.257 15:42:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.257 15:42:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.257 15:42:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.257 ************************************ 00:05:17.257 START TEST rpc_trace_cmd_test 00:05:17.257 ************************************ 00:05:17.257 15:42:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:17.257 15:42:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:17.257 15:42:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:17.257 15:42:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.257 15:42:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.257 15:42:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.257 15:42:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:17.257 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid633593", 00:05:17.257 "tpoint_group_mask": "0x8", 00:05:17.257 "iscsi_conn": { 00:05:17.257 "mask": "0x2", 00:05:17.257 "tpoint_mask": "0x0" 00:05:17.257 }, 00:05:17.257 "scsi": { 00:05:17.257 "mask": "0x4", 00:05:17.257 "tpoint_mask": "0x0" 00:05:17.257 }, 00:05:17.257 "bdev": { 00:05:17.257 "mask": "0x8", 00:05:17.257 "tpoint_mask": "0xffffffffffffffff" 00:05:17.257 }, 00:05:17.257 "nvmf_rdma": { 00:05:17.257 "mask": "0x10", 00:05:17.257 "tpoint_mask": "0x0" 00:05:17.257 }, 00:05:17.257 "nvmf_tcp": { 00:05:17.257 "mask": "0x20", 00:05:17.257 "tpoint_mask": "0x0" 00:05:17.257 }, 00:05:17.257 "ftl": { 00:05:17.257 "mask": "0x40", 00:05:17.257 "tpoint_mask": "0x0" 00:05:17.257 }, 00:05:17.257 "blobfs": { 00:05:17.257 "mask": "0x80", 00:05:17.257 "tpoint_mask": "0x0" 00:05:17.257 }, 00:05:17.257 "dsa": { 00:05:17.257 "mask": "0x200", 00:05:17.257 "tpoint_mask": "0x0" 00:05:17.257 }, 00:05:17.257 "thread": { 00:05:17.257 "mask": "0x400", 00:05:17.257 "tpoint_mask": "0x0" 00:05:17.257 }, 00:05:17.257 "nvme_pcie": { 00:05:17.257 "mask": "0x800", 00:05:17.257 "tpoint_mask": "0x0" 00:05:17.257 }, 00:05:17.257 "iaa": { 00:05:17.257 "mask": "0x1000", 00:05:17.257 "tpoint_mask": "0x0" 00:05:17.257 }, 00:05:17.257 "nvme_tcp": { 00:05:17.257 "mask": "0x2000", 00:05:17.257 "tpoint_mask": "0x0" 00:05:17.257 }, 00:05:17.257 "bdev_nvme": { 00:05:17.257 "mask": "0x4000", 00:05:17.257 "tpoint_mask": "0x0" 00:05:17.257 }, 00:05:17.257 "sock": { 00:05:17.257 "mask": "0x8000", 00:05:17.257 "tpoint_mask": "0x0" 00:05:17.257 } 00:05:17.257 }' 00:05:17.257 15:42:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:17.257 15:42:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:17.257 15:42:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:17.257 15:42:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:17.257 15:42:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:17.257 15:42:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:17.257 15:42:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:17.516 15:42:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:17.516 15:42:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:17.516 15:42:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
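rpc_trace_cmd_test, traced above, relies on the target having been started with '-e bdev': trace_get_info then reports tpoint_group_mask 0x8, an all-ones mask for the bdev group, and the shared-memory trace file for this pid. The same checks can be reproduced by hand roughly as follows (the jq filters mirror the ones in the trace):

    info=$(./scripts/rpc.py trace_get_info)
    echo "$info" | jq -r .tpoint_group_mask     # 0x8 -> only the bdev tracepoint group is enabled
    echo "$info" | jq -r .bdev.tpoint_mask      # 0xffffffffffffffff -> every bdev tracepoint is on
    echo "$info" | jq -r .tpoint_shm_path       # /dev/shm/spdk_tgt_trace.pid<pid>, consumable by the spdk_trace tool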
00:05:17.516 00:05:17.516 real 0m0.193s 00:05:17.516 user 0m0.173s 00:05:17.516 sys 0m0.013s 00:05:17.516 15:42:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.516 15:42:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:17.516 ************************************ 00:05:17.516 END TEST rpc_trace_cmd_test 00:05:17.516 ************************************ 00:05:17.516 15:42:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.516 15:42:14 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:17.516 15:42:14 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:17.516 15:42:14 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:17.516 15:42:14 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.516 15:42:14 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.516 15:42:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.516 ************************************ 00:05:17.516 START TEST rpc_daemon_integrity 00:05:17.516 ************************************ 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:17.516 { 00:05:17.516 "name": "Malloc2", 00:05:17.516 "aliases": [ 00:05:17.516 "f050d078-1863-447f-8f9f-9de0148c1a49" 00:05:17.516 ], 00:05:17.516 "product_name": "Malloc disk", 00:05:17.516 "block_size": 512, 00:05:17.516 "num_blocks": 16384, 00:05:17.516 "uuid": "f050d078-1863-447f-8f9f-9de0148c1a49", 00:05:17.516 "assigned_rate_limits": { 00:05:17.516 "rw_ios_per_sec": 0, 00:05:17.516 "rw_mbytes_per_sec": 0, 00:05:17.516 "r_mbytes_per_sec": 0, 00:05:17.516 "w_mbytes_per_sec": 0 00:05:17.516 }, 00:05:17.516 "claimed": false, 00:05:17.516 "zoned": false, 00:05:17.516 "supported_io_types": { 00:05:17.516 "read": true, 00:05:17.516 "write": true, 00:05:17.516 "unmap": true, 00:05:17.516 "flush": true, 00:05:17.516 "reset": true, 00:05:17.516 "nvme_admin": false, 00:05:17.516 "nvme_io": false, 
00:05:17.516 "nvme_io_md": false, 00:05:17.516 "write_zeroes": true, 00:05:17.516 "zcopy": true, 00:05:17.516 "get_zone_info": false, 00:05:17.516 "zone_management": false, 00:05:17.516 "zone_append": false, 00:05:17.516 "compare": false, 00:05:17.516 "compare_and_write": false, 00:05:17.516 "abort": true, 00:05:17.516 "seek_hole": false, 00:05:17.516 "seek_data": false, 00:05:17.516 "copy": true, 00:05:17.516 "nvme_iov_md": false 00:05:17.516 }, 00:05:17.516 "memory_domains": [ 00:05:17.516 { 00:05:17.516 "dma_device_id": "system", 00:05:17.516 "dma_device_type": 1 00:05:17.516 }, 00:05:17.516 { 00:05:17.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.516 "dma_device_type": 2 00:05:17.516 } 00:05:17.516 ], 00:05:17.516 "driver_specific": {} 00:05:17.516 } 00:05:17.516 ]' 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.516 [2024-07-12 15:42:14.743043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:17.516 [2024-07-12 15:42:14.743085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.516 [2024-07-12 15:42:14.743105] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdc8610 00:05:17.516 [2024-07-12 15:42:14.743117] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.516 [2024-07-12 15:42:14.744225] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.516 [2024-07-12 15:42:14.744249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:17.516 Passthru0 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.516 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:17.516 { 00:05:17.516 "name": "Malloc2", 00:05:17.516 "aliases": [ 00:05:17.516 "f050d078-1863-447f-8f9f-9de0148c1a49" 00:05:17.516 ], 00:05:17.516 "product_name": "Malloc disk", 00:05:17.516 "block_size": 512, 00:05:17.516 "num_blocks": 16384, 00:05:17.516 "uuid": "f050d078-1863-447f-8f9f-9de0148c1a49", 00:05:17.516 "assigned_rate_limits": { 00:05:17.516 "rw_ios_per_sec": 0, 00:05:17.516 "rw_mbytes_per_sec": 0, 00:05:17.516 "r_mbytes_per_sec": 0, 00:05:17.516 "w_mbytes_per_sec": 0 00:05:17.516 }, 00:05:17.516 "claimed": true, 00:05:17.516 "claim_type": "exclusive_write", 00:05:17.516 "zoned": false, 00:05:17.516 "supported_io_types": { 00:05:17.516 "read": true, 00:05:17.516 "write": true, 00:05:17.516 "unmap": true, 00:05:17.516 "flush": true, 00:05:17.516 "reset": true, 00:05:17.516 "nvme_admin": false, 00:05:17.516 "nvme_io": false, 00:05:17.516 "nvme_io_md": false, 00:05:17.516 "write_zeroes": true, 00:05:17.516 "zcopy": true, 00:05:17.516 "get_zone_info": 
false, 00:05:17.516 "zone_management": false, 00:05:17.516 "zone_append": false, 00:05:17.516 "compare": false, 00:05:17.516 "compare_and_write": false, 00:05:17.516 "abort": true, 00:05:17.516 "seek_hole": false, 00:05:17.516 "seek_data": false, 00:05:17.516 "copy": true, 00:05:17.516 "nvme_iov_md": false 00:05:17.516 }, 00:05:17.516 "memory_domains": [ 00:05:17.516 { 00:05:17.516 "dma_device_id": "system", 00:05:17.516 "dma_device_type": 1 00:05:17.516 }, 00:05:17.516 { 00:05:17.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.516 "dma_device_type": 2 00:05:17.516 } 00:05:17.516 ], 00:05:17.516 "driver_specific": {} 00:05:17.516 }, 00:05:17.516 { 00:05:17.516 "name": "Passthru0", 00:05:17.516 "aliases": [ 00:05:17.516 "29d1b8b4-a0bc-54d3-b8b0-431e5d34a557" 00:05:17.516 ], 00:05:17.516 "product_name": "passthru", 00:05:17.516 "block_size": 512, 00:05:17.516 "num_blocks": 16384, 00:05:17.516 "uuid": "29d1b8b4-a0bc-54d3-b8b0-431e5d34a557", 00:05:17.516 "assigned_rate_limits": { 00:05:17.516 "rw_ios_per_sec": 0, 00:05:17.516 "rw_mbytes_per_sec": 0, 00:05:17.516 "r_mbytes_per_sec": 0, 00:05:17.516 "w_mbytes_per_sec": 0 00:05:17.516 }, 00:05:17.516 "claimed": false, 00:05:17.516 "zoned": false, 00:05:17.516 "supported_io_types": { 00:05:17.516 "read": true, 00:05:17.516 "write": true, 00:05:17.516 "unmap": true, 00:05:17.516 "flush": true, 00:05:17.516 "reset": true, 00:05:17.516 "nvme_admin": false, 00:05:17.517 "nvme_io": false, 00:05:17.517 "nvme_io_md": false, 00:05:17.517 "write_zeroes": true, 00:05:17.517 "zcopy": true, 00:05:17.517 "get_zone_info": false, 00:05:17.517 "zone_management": false, 00:05:17.517 "zone_append": false, 00:05:17.517 "compare": false, 00:05:17.517 "compare_and_write": false, 00:05:17.517 "abort": true, 00:05:17.517 "seek_hole": false, 00:05:17.517 "seek_data": false, 00:05:17.517 "copy": true, 00:05:17.517 "nvme_iov_md": false 00:05:17.517 }, 00:05:17.517 "memory_domains": [ 00:05:17.517 { 00:05:17.517 "dma_device_id": "system", 00:05:17.517 "dma_device_type": 1 00:05:17.517 }, 00:05:17.517 { 00:05:17.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:17.517 "dma_device_type": 2 00:05:17.517 } 00:05:17.517 ], 00:05:17.517 "driver_specific": { 00:05:17.517 "passthru": { 00:05:17.517 "name": "Passthru0", 00:05:17.517 "base_bdev_name": "Malloc2" 00:05:17.517 } 00:05:17.517 } 00:05:17.517 } 00:05:17.517 ]' 00:05:17.517 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:17.517 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:17.517 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:17.517 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.517 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.517 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.517 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:17.517 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.517 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.775 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.775 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:17.775 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.775 15:42:14 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.775 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.775 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:17.775 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:17.775 15:42:14 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:17.775 00:05:17.775 real 0m0.213s 00:05:17.775 user 0m0.141s 00:05:17.775 sys 0m0.017s 00:05:17.775 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.775 15:42:14 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:17.775 ************************************ 00:05:17.775 END TEST rpc_daemon_integrity 00:05:17.775 ************************************ 00:05:17.775 15:42:14 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:17.775 15:42:14 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:17.775 15:42:14 rpc -- rpc/rpc.sh@84 -- # killprocess 633593 00:05:17.775 15:42:14 rpc -- common/autotest_common.sh@948 -- # '[' -z 633593 ']' 00:05:17.775 15:42:14 rpc -- common/autotest_common.sh@952 -- # kill -0 633593 00:05:17.775 15:42:14 rpc -- common/autotest_common.sh@953 -- # uname 00:05:17.775 15:42:14 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.775 15:42:14 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 633593 00:05:17.775 15:42:14 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.775 15:42:14 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.775 15:42:14 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 633593' 00:05:17.775 killing process with pid 633593 00:05:17.775 15:42:14 rpc -- common/autotest_common.sh@967 -- # kill 633593 00:05:17.775 15:42:14 rpc -- common/autotest_common.sh@972 -- # wait 633593 00:05:18.366 00:05:18.366 real 0m1.900s 00:05:18.366 user 0m2.352s 00:05:18.366 sys 0m0.576s 00:05:18.366 15:42:15 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.366 15:42:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.366 ************************************ 00:05:18.366 END TEST rpc 00:05:18.366 ************************************ 00:05:18.366 15:42:15 -- common/autotest_common.sh@1142 -- # return 0 00:05:18.366 15:42:15 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:18.366 15:42:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.366 15:42:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.366 15:42:15 -- common/autotest_common.sh@10 -- # set +x 00:05:18.366 ************************************ 00:05:18.366 START TEST skip_rpc 00:05:18.366 ************************************ 00:05:18.366 15:42:15 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:18.366 * Looking for test storage... 
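The killprocess helper traced just above (kill -0, a ps --no-headers -o comm= sanity check on the process name, then kill and wait) is the standard teardown for every spdk_tgt these suites start. Reduced to its essentials it looks roughly like this (simplified; the in-tree helper also inspects the process name before killing, as the comm= check shows):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if the process is already gone
        kill "$pid"
        wait "$pid" 2>/dev/null || true          # reap it and ignore its exit status
    }
    killprocess "$spdk_pid"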
00:05:18.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:18.366 15:42:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:18.366 15:42:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:18.366 15:42:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:18.366 15:42:15 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.366 15:42:15 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.366 15:42:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.366 ************************************ 00:05:18.366 START TEST skip_rpc 00:05:18.366 ************************************ 00:05:18.366 15:42:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:18.366 15:42:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=634020 00:05:18.366 15:42:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:18.366 15:42:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.366 15:42:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:18.366 [2024-07-12 15:42:15.513658] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:05:18.366 [2024-07-12 15:42:15.513768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634020 ] 00:05:18.366 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.366 [2024-07-12 15:42:15.573318] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.655 [2024-07-12 15:42:15.687772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 634020 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 634020 ']' 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 634020 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 634020 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 634020' 00:05:23.923 killing process with pid 634020 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 634020 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 634020 00:05:23.923 00:05:23.923 real 0m5.474s 00:05:23.923 user 0m5.173s 00:05:23.923 sys 0m0.306s 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.923 15:42:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.923 ************************************ 00:05:23.923 END TEST skip_rpc 00:05:23.923 ************************************ 00:05:23.923 15:42:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:23.923 15:42:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:23.923 15:42:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.923 15:42:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.923 15:42:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.923 ************************************ 00:05:23.923 START TEST skip_rpc_with_json 00:05:23.923 ************************************ 00:05:23.923 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:23.923 15:42:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:23.923 15:42:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=634601 00:05:23.923 15:42:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.923 15:42:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.923 15:42:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 634601 00:05:23.923 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 634601 ']' 00:05:23.923 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.923 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.923 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
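The skip_rpc case that finished above inverts the usual expectation: spdk_tgt is started with --no-rpc-server, so no socket ever appears, any rpc_cmd must fail, and the NOT wrapper turns that failure into a pass. A stand-alone version of the same check might look like this (the sleep mirrors the test's fixed 5-second wait, since there is no socket to poll):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                                   # give the target time to start; no RPC socket will appear
    if ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
        echo "unexpected: the RPC server answered despite --no-rpc-server" >&2
        kill "$spdk_pid"; exit 1
    fi
    kill "$spdk_pid"; wait "$spdk_pid"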
00:05:23.923 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.923 15:42:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.923 [2024-07-12 15:42:21.047828] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:05:23.923 [2024-07-12 15:42:21.047924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid634601 ] 00:05:23.923 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.923 [2024-07-12 15:42:21.106479] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.180 [2024-07-12 15:42:21.217084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.180 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.180 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:24.180 15:42:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:24.180 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.180 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.180 [2024-07-12 15:42:21.460856] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:24.180 request: 00:05:24.180 { 00:05:24.180 "trtype": "tcp", 00:05:24.180 "method": "nvmf_get_transports", 00:05:24.180 "req_id": 1 00:05:24.180 } 00:05:24.180 Got JSON-RPC error response 00:05:24.180 response: 00:05:24.180 { 00:05:24.180 "code": -19, 00:05:24.180 "message": "No such device" 00:05:24.180 } 00:05:24.180 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:24.180 15:42:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:24.180 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.180 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.180 [2024-07-12 15:42:21.468967] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.180 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.180 15:42:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:24.180 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.180 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.451 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.451 15:42:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:24.451 { 00:05:24.451 "subsystems": [ 00:05:24.451 { 00:05:24.451 "subsystem": "vfio_user_target", 00:05:24.451 "config": null 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "keyring", 00:05:24.451 "config": [] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "iobuf", 00:05:24.451 "config": [ 00:05:24.451 { 00:05:24.451 "method": "iobuf_set_options", 00:05:24.451 "params": { 00:05:24.451 "small_pool_count": 8192, 00:05:24.451 "large_pool_count": 1024, 00:05:24.451 "small_bufsize": 8192, 00:05:24.451 "large_bufsize": 
135168 00:05:24.451 } 00:05:24.451 } 00:05:24.451 ] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "sock", 00:05:24.451 "config": [ 00:05:24.451 { 00:05:24.451 "method": "sock_set_default_impl", 00:05:24.451 "params": { 00:05:24.451 "impl_name": "posix" 00:05:24.451 } 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "method": "sock_impl_set_options", 00:05:24.451 "params": { 00:05:24.451 "impl_name": "ssl", 00:05:24.451 "recv_buf_size": 4096, 00:05:24.451 "send_buf_size": 4096, 00:05:24.451 "enable_recv_pipe": true, 00:05:24.451 "enable_quickack": false, 00:05:24.451 "enable_placement_id": 0, 00:05:24.451 "enable_zerocopy_send_server": true, 00:05:24.451 "enable_zerocopy_send_client": false, 00:05:24.451 "zerocopy_threshold": 0, 00:05:24.451 "tls_version": 0, 00:05:24.451 "enable_ktls": false 00:05:24.451 } 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "method": "sock_impl_set_options", 00:05:24.451 "params": { 00:05:24.451 "impl_name": "posix", 00:05:24.451 "recv_buf_size": 2097152, 00:05:24.451 "send_buf_size": 2097152, 00:05:24.451 "enable_recv_pipe": true, 00:05:24.451 "enable_quickack": false, 00:05:24.451 "enable_placement_id": 0, 00:05:24.451 "enable_zerocopy_send_server": true, 00:05:24.451 "enable_zerocopy_send_client": false, 00:05:24.451 "zerocopy_threshold": 0, 00:05:24.451 "tls_version": 0, 00:05:24.451 "enable_ktls": false 00:05:24.451 } 00:05:24.451 } 00:05:24.451 ] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "vmd", 00:05:24.451 "config": [] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "accel", 00:05:24.451 "config": [ 00:05:24.451 { 00:05:24.451 "method": "accel_set_options", 00:05:24.451 "params": { 00:05:24.451 "small_cache_size": 128, 00:05:24.451 "large_cache_size": 16, 00:05:24.451 "task_count": 2048, 00:05:24.451 "sequence_count": 2048, 00:05:24.451 "buf_count": 2048 00:05:24.451 } 00:05:24.451 } 00:05:24.451 ] 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "subsystem": "bdev", 00:05:24.451 "config": [ 00:05:24.451 { 00:05:24.451 "method": "bdev_set_options", 00:05:24.451 "params": { 00:05:24.451 "bdev_io_pool_size": 65535, 00:05:24.451 "bdev_io_cache_size": 256, 00:05:24.451 "bdev_auto_examine": true, 00:05:24.451 "iobuf_small_cache_size": 128, 00:05:24.451 "iobuf_large_cache_size": 16 00:05:24.451 } 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "method": "bdev_raid_set_options", 00:05:24.451 "params": { 00:05:24.451 "process_window_size_kb": 1024 00:05:24.451 } 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "method": "bdev_iscsi_set_options", 00:05:24.451 "params": { 00:05:24.451 "timeout_sec": 30 00:05:24.451 } 00:05:24.451 }, 00:05:24.451 { 00:05:24.451 "method": "bdev_nvme_set_options", 00:05:24.451 "params": { 00:05:24.451 "action_on_timeout": "none", 00:05:24.451 "timeout_us": 0, 00:05:24.451 "timeout_admin_us": 0, 00:05:24.451 "keep_alive_timeout_ms": 10000, 00:05:24.451 "arbitration_burst": 0, 00:05:24.451 "low_priority_weight": 0, 00:05:24.451 "medium_priority_weight": 0, 00:05:24.451 "high_priority_weight": 0, 00:05:24.451 "nvme_adminq_poll_period_us": 10000, 00:05:24.451 "nvme_ioq_poll_period_us": 0, 00:05:24.452 "io_queue_requests": 0, 00:05:24.452 "delay_cmd_submit": true, 00:05:24.452 "transport_retry_count": 4, 00:05:24.452 "bdev_retry_count": 3, 00:05:24.452 "transport_ack_timeout": 0, 00:05:24.452 "ctrlr_loss_timeout_sec": 0, 00:05:24.452 "reconnect_delay_sec": 0, 00:05:24.452 "fast_io_fail_timeout_sec": 0, 00:05:24.452 "disable_auto_failback": false, 00:05:24.452 "generate_uuids": false, 00:05:24.452 "transport_tos": 0, 
00:05:24.452 "nvme_error_stat": false, 00:05:24.452 "rdma_srq_size": 0, 00:05:24.452 "io_path_stat": false, 00:05:24.452 "allow_accel_sequence": false, 00:05:24.452 "rdma_max_cq_size": 0, 00:05:24.452 "rdma_cm_event_timeout_ms": 0, 00:05:24.452 "dhchap_digests": [ 00:05:24.452 "sha256", 00:05:24.452 "sha384", 00:05:24.452 "sha512" 00:05:24.452 ], 00:05:24.452 "dhchap_dhgroups": [ 00:05:24.452 "null", 00:05:24.452 "ffdhe2048", 00:05:24.452 "ffdhe3072", 00:05:24.452 "ffdhe4096", 00:05:24.452 "ffdhe6144", 00:05:24.452 "ffdhe8192" 00:05:24.452 ] 00:05:24.452 } 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "method": "bdev_nvme_set_hotplug", 00:05:24.452 "params": { 00:05:24.452 "period_us": 100000, 00:05:24.452 "enable": false 00:05:24.452 } 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "method": "bdev_wait_for_examine" 00:05:24.452 } 00:05:24.452 ] 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "subsystem": "scsi", 00:05:24.452 "config": null 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "subsystem": "scheduler", 00:05:24.452 "config": [ 00:05:24.452 { 00:05:24.452 "method": "framework_set_scheduler", 00:05:24.452 "params": { 00:05:24.452 "name": "static" 00:05:24.452 } 00:05:24.452 } 00:05:24.452 ] 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "subsystem": "vhost_scsi", 00:05:24.452 "config": [] 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "subsystem": "vhost_blk", 00:05:24.452 "config": [] 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "subsystem": "ublk", 00:05:24.452 "config": [] 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "subsystem": "nbd", 00:05:24.452 "config": [] 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "subsystem": "nvmf", 00:05:24.452 "config": [ 00:05:24.452 { 00:05:24.452 "method": "nvmf_set_config", 00:05:24.452 "params": { 00:05:24.452 "discovery_filter": "match_any", 00:05:24.452 "admin_cmd_passthru": { 00:05:24.452 "identify_ctrlr": false 00:05:24.452 } 00:05:24.452 } 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "method": "nvmf_set_max_subsystems", 00:05:24.452 "params": { 00:05:24.452 "max_subsystems": 1024 00:05:24.452 } 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "method": "nvmf_set_crdt", 00:05:24.452 "params": { 00:05:24.452 "crdt1": 0, 00:05:24.452 "crdt2": 0, 00:05:24.452 "crdt3": 0 00:05:24.452 } 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "method": "nvmf_create_transport", 00:05:24.452 "params": { 00:05:24.452 "trtype": "TCP", 00:05:24.452 "max_queue_depth": 128, 00:05:24.452 "max_io_qpairs_per_ctrlr": 127, 00:05:24.452 "in_capsule_data_size": 4096, 00:05:24.452 "max_io_size": 131072, 00:05:24.452 "io_unit_size": 131072, 00:05:24.452 "max_aq_depth": 128, 00:05:24.452 "num_shared_buffers": 511, 00:05:24.452 "buf_cache_size": 4294967295, 00:05:24.452 "dif_insert_or_strip": false, 00:05:24.452 "zcopy": false, 00:05:24.452 "c2h_success": true, 00:05:24.452 "sock_priority": 0, 00:05:24.452 "abort_timeout_sec": 1, 00:05:24.452 "ack_timeout": 0, 00:05:24.452 "data_wr_pool_size": 0 00:05:24.452 } 00:05:24.452 } 00:05:24.452 ] 00:05:24.452 }, 00:05:24.452 { 00:05:24.452 "subsystem": "iscsi", 00:05:24.452 "config": [ 00:05:24.452 { 00:05:24.452 "method": "iscsi_set_options", 00:05:24.452 "params": { 00:05:24.452 "node_base": "iqn.2016-06.io.spdk", 00:05:24.452 "max_sessions": 128, 00:05:24.452 "max_connections_per_session": 2, 00:05:24.452 "max_queue_depth": 64, 00:05:24.452 "default_time2wait": 2, 00:05:24.452 "default_time2retain": 20, 00:05:24.452 "first_burst_length": 8192, 00:05:24.452 "immediate_data": true, 00:05:24.452 "allow_duplicated_isid": false, 00:05:24.452 
"error_recovery_level": 0, 00:05:24.452 "nop_timeout": 60, 00:05:24.452 "nop_in_interval": 30, 00:05:24.452 "disable_chap": false, 00:05:24.452 "require_chap": false, 00:05:24.452 "mutual_chap": false, 00:05:24.452 "chap_group": 0, 00:05:24.452 "max_large_datain_per_connection": 64, 00:05:24.452 "max_r2t_per_connection": 4, 00:05:24.452 "pdu_pool_size": 36864, 00:05:24.452 "immediate_data_pool_size": 16384, 00:05:24.452 "data_out_pool_size": 2048 00:05:24.452 } 00:05:24.452 } 00:05:24.452 ] 00:05:24.452 } 00:05:24.452 ] 00:05:24.452 } 00:05:24.452 15:42:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:24.452 15:42:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 634601 00:05:24.452 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 634601 ']' 00:05:24.452 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 634601 00:05:24.452 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:24.452 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:24.452 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 634601 00:05:24.452 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:24.452 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:24.452 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 634601' 00:05:24.452 killing process with pid 634601 00:05:24.452 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 634601 00:05:24.452 15:42:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 634601 00:05:25.016 15:42:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=634741 00:05:25.016 15:42:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:25.016 15:42:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:30.277 15:42:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 634741 00:05:30.277 15:42:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 634741 ']' 00:05:30.277 15:42:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 634741 00:05:30.277 15:42:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:30.277 15:42:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.277 15:42:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 634741 00:05:30.277 15:42:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.278 15:42:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.278 15:42:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 634741' 00:05:30.278 killing process with pid 634741 00:05:30.278 15:42:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 634741 00:05:30.278 15:42:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 634741 00:05:30.278 15:42:27 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:30.278 15:42:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:30.278 00:05:30.278 real 0m6.574s 00:05:30.278 user 0m6.197s 00:05:30.278 sys 0m0.671s 00:05:30.278 15:42:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.278 15:42:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.278 ************************************ 00:05:30.278 END TEST skip_rpc_with_json 00:05:30.278 ************************************ 00:05:30.537 15:42:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.537 15:42:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:30.537 15:42:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.537 15:42:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.537 15:42:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.537 ************************************ 00:05:30.537 START TEST skip_rpc_with_delay 00:05:30.537 ************************************ 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.537 [2024-07-12 15:42:27.676705] app.c: 836:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
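skip_rpc_with_json, whose saved configuration is dumped above, is a round-trip test: it creates the TCP transport over RPC, writes save_config to config.json, restarts a target with --no-rpc-server --json pointing at that file, and greps the new log for 'TCP Transport Init' to prove the configuration was replayed. Condensed (the config and log paths below are placeholders for the test's own CONFIG_PATH/LOG_PATH, and $spdk_pid is assumed to hold the first target's pid as in the earlier sketches):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp               # gives save_config something nvmf-specific to record
    $rpc save_config > /tmp/config.json             # the JSON document shown above
    kill "$spdk_pid"; wait "$spdk_pid"              # stop the RPC-driven target
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/config.json > /tmp/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' /tmp/log.txt       # the transport came back purely from the saved JSON

skip_rpc_with_delay, which starts right after it, treats the error printed just above ("Cannot use '--wait-for-rpc' if no RPC server is going to be started") as its pass condition: the helper simply asserts that this spdk_tgt invocation exits non-zero. In isolation, with the same flags as in the trace:

    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "unexpected: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
        exit 1
    fi
    echo "got the expected start-up error"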
00:05:30.537 [2024-07-12 15:42:27.676835] app.c: 715:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.537 00:05:30.537 real 0m0.070s 00:05:30.537 user 0m0.044s 00:05:30.537 sys 0m0.025s 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.537 15:42:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:30.537 ************************************ 00:05:30.537 END TEST skip_rpc_with_delay 00:05:30.537 ************************************ 00:05:30.537 15:42:27 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.537 15:42:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:30.537 15:42:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:30.537 15:42:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:30.537 15:42:27 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.537 15:42:27 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.537 15:42:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.537 ************************************ 00:05:30.537 START TEST exit_on_failed_rpc_init 00:05:30.537 ************************************ 00:05:30.537 15:42:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:30.537 15:42:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=635474 00:05:30.537 15:42:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.537 15:42:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 635474 00:05:30.537 15:42:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 635474 ']' 00:05:30.537 15:42:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.537 15:42:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.537 15:42:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.537 15:42:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.537 15:42:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.537 [2024-07-12 15:42:27.794353] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:05:30.537 [2024-07-12 15:42:27.794430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635474 ] 00:05:30.537 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.796 [2024-07-12 15:42:27.854503] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.796 [2024-07-12 15:42:27.958653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:31.054 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.054 [2024-07-12 15:42:28.251995] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:05:31.054 [2024-07-12 15:42:28.252088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635590 ] 00:05:31.054 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.054 [2024-07-12 15:42:28.309165] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.313 [2024-07-12 15:42:28.420838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.313 [2024-07-12 15:42:28.420963] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:31.313 [2024-07-12 15:42:28.420981] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:31.313 [2024-07-12 15:42:28.420992] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 635474 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 635474 ']' 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 635474 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 635474 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 635474' 00:05:31.313 killing process with pid 635474 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 635474 00:05:31.313 15:42:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 635474 00:05:31.878 00:05:31.878 real 0m1.273s 00:05:31.878 user 0m1.441s 00:05:31.878 sys 0m0.436s 00:05:31.878 15:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.878 15:42:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.878 ************************************ 00:05:31.878 END TEST exit_on_failed_rpc_init 00:05:31.878 ************************************ 00:05:31.878 15:42:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:31.878 15:42:29 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:31.878 00:05:31.878 real 0m13.652s 00:05:31.878 user 0m12.961s 00:05:31.878 sys 0m1.608s 00:05:31.878 15:42:29 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.878 15:42:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.878 ************************************ 00:05:31.878 END TEST skip_rpc 00:05:31.878 ************************************ 00:05:31.878 15:42:29 -- common/autotest_common.sh@1142 -- # return 0 00:05:31.878 15:42:29 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.878 15:42:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.878 15:42:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.878 15:42:29 -- common/autotest_common.sh@10 -- # set +x 00:05:31.878 ************************************ 00:05:31.878 START TEST rpc_client 00:05:31.878 ************************************ 00:05:31.878 15:42:29 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:31.878 * Looking for test storage... 00:05:31.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:31.878 15:42:29 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:31.878 OK 00:05:31.878 15:42:29 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:31.878 00:05:31.878 real 0m0.070s 00:05:31.878 user 0m0.035s 00:05:31.878 sys 0m0.040s 00:05:31.878 15:42:29 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.878 15:42:29 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:31.878 ************************************ 00:05:31.878 END TEST rpc_client 00:05:31.878 ************************************ 00:05:32.136 15:42:29 -- common/autotest_common.sh@1142 -- # return 0 00:05:32.136 15:42:29 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:32.136 15:42:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.136 15:42:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.136 15:42:29 -- common/autotest_common.sh@10 -- # set +x 00:05:32.136 ************************************ 00:05:32.136 START TEST json_config 00:05:32.136 ************************************ 00:05:32.136 15:42:29 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:32.136 15:42:29 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:32.136 15:42:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:32.136 15:42:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:32.136 15:42:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.136 15:42:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.136 15:42:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.137 15:42:29 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:32.137 15:42:29 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.137 15:42:29 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.137 15:42:29 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.137 15:42:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.137 15:42:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.137 15:42:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.137 15:42:29 json_config -- paths/export.sh@5 -- # export PATH 00:05:32.137 15:42:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@47 -- # : 0 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:32.137 15:42:29 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:32.137 15:42:29 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:32.137 INFO: JSON configuration test init 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:32.137 15:42:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.137 15:42:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:32.137 15:42:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.137 15:42:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.137 15:42:29 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:32.137 15:42:29 json_config -- json_config/common.sh@9 -- # local app=target 00:05:32.137 15:42:29 json_config -- json_config/common.sh@10 -- # shift 00:05:32.137 15:42:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:32.137 15:42:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:32.137 15:42:29 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:05:32.137 15:42:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.137 15:42:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.137 15:42:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=635828 00:05:32.137 15:42:29 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:32.137 15:42:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:32.137 Waiting for target to run... 00:05:32.137 15:42:29 json_config -- json_config/common.sh@25 -- # waitforlisten 635828 /var/tmp/spdk_tgt.sock 00:05:32.137 15:42:29 json_config -- common/autotest_common.sh@829 -- # '[' -z 635828 ']' 00:05:32.137 15:42:29 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:32.137 15:42:29 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.137 15:42:29 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:32.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:32.137 15:42:29 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.137 15:42:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.137 [2024-07-12 15:42:29.303089] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:05:32.137 [2024-07-12 15:42:29.303190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid635828 ] 00:05:32.137 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.396 [2024-07-12 15:42:29.634965] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.653 [2024-07-12 15:42:29.713049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.219 15:42:30 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.219 15:42:30 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:33.219 15:42:30 json_config -- json_config/common.sh@26 -- # echo '' 00:05:33.219 00:05:33.219 15:42:30 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:33.219 15:42:30 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:33.219 15:42:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:33.219 15:42:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.219 15:42:30 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:33.219 15:42:30 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:33.219 15:42:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:33.219 15:42:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.219 15:42:30 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:33.219 15:42:30 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:33.219 15:42:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:36.505 15:42:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:36.505 15:42:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:36.505 15:42:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:36.505 15:42:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:36.505 15:42:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:36.505 15:42:33 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:36.505 15:42:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:36.505 15:42:33 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:36.505 15:42:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:36.764 MallocForNvmf0 00:05:36.764 15:42:33 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:36.764 15:42:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:37.021 MallocForNvmf1 
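MallocForNvmf0 and MallocForNvmf1 are RAM-backed bdevs that the trace goes on to export as namespaces of nqn.2016-06.io.spdk:cnode1 over TCP. A sketch of the create_nvmf_subsystem_config RPC sequence recorded around this point, assuming a target is already listening on /var/tmp/spdk_tgt.sock (the RPC shell variable is illustrative; the individual commands are taken verbatim from the trace):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MB, 512-byte blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB, 1024-byte blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420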
00:05:37.021 15:42:34 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.021 15:42:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:37.280 [2024-07-12 15:42:34.394075] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.280 15:42:34 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:37.280 15:42:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:37.537 15:42:34 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:37.537 15:42:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:37.794 15:42:34 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:37.794 15:42:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:38.052 15:42:35 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.052 15:42:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:38.310 [2024-07-12 15:42:35.365195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:38.310 15:42:35 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:38.310 15:42:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.310 15:42:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.310 15:42:35 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:38.310 15:42:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.310 15:42:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.310 15:42:35 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:38.310 15:42:35 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.310 15:42:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:38.568 MallocBdevForConfigChangeCheck 00:05:38.568 15:42:35 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:38.568 15:42:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.568 15:42:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.568 15:42:35 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:38.568 15:42:35 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.826 15:42:36 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:38.826 INFO: shutting down applications... 00:05:38.826 15:42:36 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:38.826 15:42:36 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:38.826 15:42:36 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:38.826 15:42:36 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:40.727 Calling clear_iscsi_subsystem 00:05:40.727 Calling clear_nvmf_subsystem 00:05:40.727 Calling clear_nbd_subsystem 00:05:40.727 Calling clear_ublk_subsystem 00:05:40.727 Calling clear_vhost_blk_subsystem 00:05:40.727 Calling clear_vhost_scsi_subsystem 00:05:40.727 Calling clear_bdev_subsystem 00:05:40.727 15:42:37 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:40.727 15:42:37 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:40.727 15:42:37 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:40.727 15:42:37 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:40.727 15:42:37 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:40.727 15:42:37 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:40.985 15:42:38 json_config -- json_config/json_config.sh@345 -- # break 00:05:40.985 15:42:38 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:40.985 15:42:38 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:40.985 15:42:38 json_config -- json_config/common.sh@31 -- # local app=target 00:05:40.985 15:42:38 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:40.985 15:42:38 json_config -- json_config/common.sh@35 -- # [[ -n 635828 ]] 00:05:40.985 15:42:38 json_config -- json_config/common.sh@38 -- # kill -SIGINT 635828 00:05:40.985 15:42:38 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:40.985 15:42:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.985 15:42:38 json_config -- json_config/common.sh@41 -- # kill -0 635828 00:05:40.985 15:42:38 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.553 15:42:38 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.553 15:42:38 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.553 15:42:38 json_config -- json_config/common.sh@41 -- # kill -0 635828 00:05:41.553 15:42:38 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:41.553 15:42:38 json_config -- json_config/common.sh@43 -- # break 00:05:41.553 15:42:38 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:41.553 15:42:38 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:41.553 SPDK target shutdown done 00:05:41.553 15:42:38 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:41.553 INFO: relaunching applications... 00:05:41.553 15:42:38 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.553 15:42:38 json_config -- json_config/common.sh@9 -- # local app=target 00:05:41.553 15:42:38 json_config -- json_config/common.sh@10 -- # shift 00:05:41.553 15:42:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:41.553 15:42:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:41.553 15:42:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:41.553 15:42:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.553 15:42:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:41.553 15:42:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=637021 00:05:41.553 15:42:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.553 15:42:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:41.553 Waiting for target to run... 00:05:41.553 15:42:38 json_config -- json_config/common.sh@25 -- # waitforlisten 637021 /var/tmp/spdk_tgt.sock 00:05:41.553 15:42:38 json_config -- common/autotest_common.sh@829 -- # '[' -z 637021 ']' 00:05:41.553 15:42:38 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:41.553 15:42:38 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.553 15:42:38 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:41.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:41.553 15:42:38 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.553 15:42:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.553 [2024-07-12 15:42:38.654688] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
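Relaunching replays the saved configuration in one shot: the same spdk_tgt binary is restarted with --json pointing at the file written by save_config, so no per-object RPCs are needed. A minimal sketch with the paths used in this run (the CFG variable name is illustrative):

    CFG=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
        -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$CFG" &
    # The test then waits for the process to listen on /var/tmp/spdk_tgt.sock before issuing RPCs.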
00:05:41.553 [2024-07-12 15:42:38.654789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid637021 ] 00:05:41.553 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.119 [2024-07-12 15:42:39.194574] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.119 [2024-07-12 15:42:39.287375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.400 [2024-07-12 15:42:42.319701] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.400 [2024-07-12 15:42:42.352207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:45.964 15:42:43 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.964 15:42:43 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:45.964 15:42:43 json_config -- json_config/common.sh@26 -- # echo '' 00:05:45.964 00:05:45.964 15:42:43 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:45.964 15:42:43 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:45.964 INFO: Checking if target configuration is the same... 00:05:45.964 15:42:43 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.964 15:42:43 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:45.964 15:42:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:45.964 + '[' 2 -ne 2 ']' 00:05:45.964 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:45.964 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:45.964 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:45.964 +++ basename /dev/fd/62 00:05:45.964 ++ mktemp /tmp/62.XXX 00:05:45.964 + tmp_file_1=/tmp/62.cal 00:05:45.964 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.964 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:45.964 + tmp_file_2=/tmp/spdk_tgt_config.json.Ktb 00:05:45.964 + ret=0 00:05:45.964 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.221 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:46.221 + diff -u /tmp/62.cal /tmp/spdk_tgt_config.json.Ktb 00:05:46.221 + echo 'INFO: JSON config files are the same' 00:05:46.221 INFO: JSON config files are the same 00:05:46.221 + rm /tmp/62.cal /tmp/spdk_tgt_config.json.Ktb 00:05:46.221 + exit 0 00:05:46.221 15:42:43 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:46.221 15:42:43 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:46.221 INFO: changing configuration and checking if this can be detected... 
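The equality check above normalizes both sides before comparing: the live configuration is pulled with save_config, both JSON documents are passed through config_filter.py -method sort, and a plain diff -u decides the verdict. A condensed sketch of that flow (variable names are illustrative, not from json_diff.sh):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
    SAVED=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
    live=$(mktemp)
    ref=$(mktemp)
    $RPC save_config | $FILTER -method sort > "$live"
    $FILTER -method sort < "$SAVED" > "$ref"
    diff -u "$ref" "$live" && echo 'INFO: JSON config files are the same'
    rm -f "$live" "$ref"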
00:05:46.221 15:42:43 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:46.221 15:42:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:46.477 15:42:43 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.477 15:42:43 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:46.477 15:42:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:46.477 + '[' 2 -ne 2 ']' 00:05:46.477 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:46.477 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:46.477 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:46.477 +++ basename /dev/fd/62 00:05:46.477 ++ mktemp /tmp/62.XXX 00:05:46.477 + tmp_file_1=/tmp/62.I8K 00:05:46.477 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.477 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:46.477 + tmp_file_2=/tmp/spdk_tgt_config.json.6qZ 00:05:46.477 + ret=0 00:05:46.477 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:47.041 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:47.041 + diff -u /tmp/62.I8K /tmp/spdk_tgt_config.json.6qZ 00:05:47.041 + ret=1 00:05:47.041 + echo '=== Start of file: /tmp/62.I8K ===' 00:05:47.041 + cat /tmp/62.I8K 00:05:47.041 + echo '=== End of file: /tmp/62.I8K ===' 00:05:47.041 + echo '' 00:05:47.041 + echo '=== Start of file: /tmp/spdk_tgt_config.json.6qZ ===' 00:05:47.041 + cat /tmp/spdk_tgt_config.json.6qZ 00:05:47.041 + echo '=== End of file: /tmp/spdk_tgt_config.json.6qZ ===' 00:05:47.041 + echo '' 00:05:47.041 + rm /tmp/62.I8K /tmp/spdk_tgt_config.json.6qZ 00:05:47.041 + exit 1 00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:47.041 INFO: configuration change detected. 
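The follow-up check is the same comparison after a deliberate mutation: deleting MallocBdevForConfigChangeCheck makes the live configuration diverge from the saved file, so the sorted diff is now expected to return non-zero. A sketch under the same assumptions as the previous snippet:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    FILTER=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
    SAVED=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
    $RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
    if $RPC save_config | $FILTER -method sort | diff -u <($FILTER -method sort < "$SAVED") - >/dev/null; then
        echo 'expected a configuration change, found none' >&2
        exit 1
    fi
    echo 'INFO: configuration change detected.'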
00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@317 -- # [[ -n 637021 ]] 00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:47.041 15:42:44 json_config -- json_config/json_config.sh@323 -- # killprocess 637021 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@948 -- # '[' -z 637021 ']' 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@952 -- # kill -0 637021 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@953 -- # uname 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 637021 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 637021' 00:05:47.041 killing process with pid 637021 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@967 -- # kill 637021 00:05:47.041 15:42:44 json_config -- common/autotest_common.sh@972 -- # wait 637021 00:05:48.935 15:42:45 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:48.935 15:42:45 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:48.935 15:42:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:48.935 15:42:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.935 15:42:45 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:48.935 15:42:45 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:48.935 INFO: Success 00:05:48.935 00:05:48.935 real 0m16.767s 00:05:48.935 user 
0m18.627s 00:05:48.935 sys 0m2.112s 00:05:48.935 15:42:45 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.935 15:42:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.935 ************************************ 00:05:48.935 END TEST json_config 00:05:48.935 ************************************ 00:05:48.935 15:42:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:48.935 15:42:45 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:48.935 15:42:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.935 15:42:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.935 15:42:45 -- common/autotest_common.sh@10 -- # set +x 00:05:48.935 ************************************ 00:05:48.935 START TEST json_config_extra_key 00:05:48.935 ************************************ 00:05:48.935 15:42:46 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:48.935 15:42:46 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:48.935 15:42:46 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.935 15:42:46 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.935 15:42:46 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.935 15:42:46 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.935 15:42:46 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.935 15:42:46 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.935 15:42:46 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:48.935 15:42:46 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:48.935 15:42:46 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:48.935 15:42:46 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:48.935 15:42:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:48.935 15:42:46 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:48.935 15:42:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:48.935 15:42:46 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:48.935 15:42:46 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:48.935 15:42:46 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:48.935 15:42:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:48.935 15:42:46 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:48.935 15:42:46 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:48.935 15:42:46 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:48.935 INFO: launching applications... 00:05:48.935 15:42:46 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:48.935 15:42:46 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:48.935 15:42:46 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:48.935 15:42:46 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:48.935 15:42:46 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:48.935 15:42:46 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:48.935 15:42:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.935 15:42:46 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.935 15:42:46 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=638056 00:05:48.935 15:42:46 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:48.935 15:42:46 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:48.935 Waiting for target to run... 00:05:48.935 15:42:46 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 638056 /var/tmp/spdk_tgt.sock 00:05:48.935 15:42:46 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 638056 ']' 00:05:48.935 15:42:46 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.935 15:42:46 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.935 15:42:46 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.935 15:42:46 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.935 15:42:46 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:48.935 [2024-07-12 15:42:46.110081] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:05:48.935 [2024-07-12 15:42:46.110178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid638056 ] 00:05:48.935 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.501 [2024-07-12 15:42:46.642509] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.501 [2024-07-12 15:42:46.737596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.758 15:42:47 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.758 15:42:47 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:49.758 15:42:47 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:49.758 00:05:49.758 15:42:47 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:49.758 INFO: shutting down applications... 00:05:49.758 15:42:47 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:49.758 15:42:47 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:49.758 15:42:47 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:49.758 15:42:47 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 638056 ]] 00:05:49.758 15:42:47 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 638056 00:05:49.758 15:42:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:49.758 15:42:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:49.758 15:42:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 638056 00:05:49.758 15:42:47 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.326 15:42:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.326 15:42:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.326 15:42:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 638056 00:05:50.326 15:42:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:50.326 15:42:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:50.326 15:42:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:50.326 15:42:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:50.326 SPDK target shutdown done 00:05:50.326 15:42:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:50.326 Success 00:05:50.326 00:05:50.326 real 0m1.545s 00:05:50.326 user 0m1.337s 00:05:50.326 sys 0m0.630s 00:05:50.326 15:42:47 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.326 15:42:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:50.326 ************************************ 00:05:50.326 END TEST json_config_extra_key 00:05:50.326 ************************************ 00:05:50.326 15:42:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.326 15:42:47 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.326 15:42:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.326 15:42:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.326 15:42:47 -- 
common/autotest_common.sh@10 -- # set +x 00:05:50.326 ************************************ 00:05:50.326 START TEST alias_rpc 00:05:50.326 ************************************ 00:05:50.326 15:42:47 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.639 * Looking for test storage... 00:05:50.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:50.639 15:42:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.639 15:42:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=638251 00:05:50.639 15:42:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.639 15:42:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 638251 00:05:50.639 15:42:47 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 638251 ']' 00:05:50.639 15:42:47 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.639 15:42:47 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.639 15:42:47 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.639 15:42:47 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.639 15:42:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.639 [2024-07-12 15:42:47.713251] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:05:50.639 [2024-07-12 15:42:47.713334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid638251 ] 00:05:50.639 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.639 [2024-07-12 15:42:47.778493] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.639 [2024-07-12 15:42:47.894460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.917 15:42:48 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.917 15:42:48 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:50.917 15:42:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:51.175 15:42:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 638251 00:05:51.175 15:42:48 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 638251 ']' 00:05:51.175 15:42:48 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 638251 00:05:51.175 15:42:48 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:51.175 15:42:48 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.175 15:42:48 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 638251 00:05:51.175 15:42:48 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.175 15:42:48 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.175 15:42:48 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 638251' 00:05:51.175 killing process with pid 638251 00:05:51.175 15:42:48 alias_rpc -- common/autotest_common.sh@967 
-- # kill 638251 00:05:51.175 15:42:48 alias_rpc -- common/autotest_common.sh@972 -- # wait 638251 00:05:51.740 00:05:51.740 real 0m1.255s 00:05:51.740 user 0m1.353s 00:05:51.740 sys 0m0.411s 00:05:51.740 15:42:48 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.740 15:42:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.740 ************************************ 00:05:51.740 END TEST alias_rpc 00:05:51.740 ************************************ 00:05:51.740 15:42:48 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.740 15:42:48 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:51.740 15:42:48 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:51.740 15:42:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.740 15:42:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.740 15:42:48 -- common/autotest_common.sh@10 -- # set +x 00:05:51.740 ************************************ 00:05:51.740 START TEST spdkcli_tcp 00:05:51.740 ************************************ 00:05:51.740 15:42:48 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:51.740 * Looking for test storage... 00:05:51.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:51.740 15:42:48 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:51.740 15:42:48 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:51.740 15:42:48 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:51.740 15:42:48 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:51.740 15:42:48 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:51.740 15:42:48 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:51.740 15:42:48 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:51.740 15:42:48 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.740 15:42:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.740 15:42:48 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=638558 00:05:51.740 15:42:48 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:51.740 15:42:48 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 638558 00:05:51.740 15:42:48 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 638558 ']' 00:05:51.740 15:42:48 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.740 15:42:48 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.740 15:42:48 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.740 15:42:48 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.740 15:42:48 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:51.740 [2024-07-12 15:42:49.008482] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
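The alias_rpc run that just completed follows the stock SPDK functional-test shape: start spdk_tgt, wait for its UNIX-domain RPC socket, drive it through scripts/rpc.py (here load_config -i, presumably to exercise the deprecated RPC alias names the test is named after), then kill the target. A minimal sketch of that flow, with SPDK_DIR as a placeholder and a crude sleep loop standing in for the waitforlisten/killprocess helpers from autotest_common.sh:

    # Sketch only: SPDK_DIR and the socket-wait loop are assumptions, not the test's helpers.
    SPDK_DIR=/path/to/spdk
    "$SPDK_DIR/build/bin/spdk_tgt" &
    tgt_pid=$!
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done            # wait for the RPC socket to appear
    "$SPDK_DIR/scripts/rpc.py" save_config > /tmp/config.json      # grab a config to replay
    "$SPDK_DIR/scripts/rpc.py" load_config -i < /tmp/config.json   # -i copied verbatim from the trace
    kill "$tgt_pid"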
00:05:51.740 [2024-07-12 15:42:49.008583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid638558 ] 00:05:51.997 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.997 [2024-07-12 15:42:49.066484] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.997 [2024-07-12 15:42:49.173735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.997 [2024-07-12 15:42:49.173744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.255 15:42:49 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.255 15:42:49 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:52.255 15:42:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=638571 00:05:52.255 15:42:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:52.255 15:42:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:52.513 [ 00:05:52.513 "bdev_malloc_delete", 00:05:52.513 "bdev_malloc_create", 00:05:52.513 "bdev_null_resize", 00:05:52.513 "bdev_null_delete", 00:05:52.513 "bdev_null_create", 00:05:52.513 "bdev_nvme_cuse_unregister", 00:05:52.513 "bdev_nvme_cuse_register", 00:05:52.513 "bdev_opal_new_user", 00:05:52.513 "bdev_opal_set_lock_state", 00:05:52.513 "bdev_opal_delete", 00:05:52.513 "bdev_opal_get_info", 00:05:52.513 "bdev_opal_create", 00:05:52.513 "bdev_nvme_opal_revert", 00:05:52.513 "bdev_nvme_opal_init", 00:05:52.513 "bdev_nvme_send_cmd", 00:05:52.513 "bdev_nvme_get_path_iostat", 00:05:52.513 "bdev_nvme_get_mdns_discovery_info", 00:05:52.513 "bdev_nvme_stop_mdns_discovery", 00:05:52.513 "bdev_nvme_start_mdns_discovery", 00:05:52.513 "bdev_nvme_set_multipath_policy", 00:05:52.513 "bdev_nvme_set_preferred_path", 00:05:52.513 "bdev_nvme_get_io_paths", 00:05:52.513 "bdev_nvme_remove_error_injection", 00:05:52.513 "bdev_nvme_add_error_injection", 00:05:52.513 "bdev_nvme_get_discovery_info", 00:05:52.513 "bdev_nvme_stop_discovery", 00:05:52.513 "bdev_nvme_start_discovery", 00:05:52.513 "bdev_nvme_get_controller_health_info", 00:05:52.513 "bdev_nvme_disable_controller", 00:05:52.513 "bdev_nvme_enable_controller", 00:05:52.513 "bdev_nvme_reset_controller", 00:05:52.513 "bdev_nvme_get_transport_statistics", 00:05:52.513 "bdev_nvme_apply_firmware", 00:05:52.513 "bdev_nvme_detach_controller", 00:05:52.513 "bdev_nvme_get_controllers", 00:05:52.513 "bdev_nvme_attach_controller", 00:05:52.513 "bdev_nvme_set_hotplug", 00:05:52.513 "bdev_nvme_set_options", 00:05:52.513 "bdev_passthru_delete", 00:05:52.513 "bdev_passthru_create", 00:05:52.513 "bdev_lvol_set_parent_bdev", 00:05:52.513 "bdev_lvol_set_parent", 00:05:52.513 "bdev_lvol_check_shallow_copy", 00:05:52.513 "bdev_lvol_start_shallow_copy", 00:05:52.513 "bdev_lvol_grow_lvstore", 00:05:52.513 "bdev_lvol_get_lvols", 00:05:52.513 "bdev_lvol_get_lvstores", 00:05:52.513 "bdev_lvol_delete", 00:05:52.513 "bdev_lvol_set_read_only", 00:05:52.513 "bdev_lvol_resize", 00:05:52.513 "bdev_lvol_decouple_parent", 00:05:52.513 "bdev_lvol_inflate", 00:05:52.513 "bdev_lvol_rename", 00:05:52.513 "bdev_lvol_clone_bdev", 00:05:52.513 "bdev_lvol_clone", 00:05:52.513 "bdev_lvol_snapshot", 00:05:52.513 "bdev_lvol_create", 00:05:52.513 "bdev_lvol_delete_lvstore", 00:05:52.513 
"bdev_lvol_rename_lvstore", 00:05:52.513 "bdev_lvol_create_lvstore", 00:05:52.513 "bdev_raid_set_options", 00:05:52.513 "bdev_raid_remove_base_bdev", 00:05:52.513 "bdev_raid_add_base_bdev", 00:05:52.513 "bdev_raid_delete", 00:05:52.513 "bdev_raid_create", 00:05:52.513 "bdev_raid_get_bdevs", 00:05:52.513 "bdev_error_inject_error", 00:05:52.513 "bdev_error_delete", 00:05:52.513 "bdev_error_create", 00:05:52.513 "bdev_split_delete", 00:05:52.513 "bdev_split_create", 00:05:52.513 "bdev_delay_delete", 00:05:52.513 "bdev_delay_create", 00:05:52.513 "bdev_delay_update_latency", 00:05:52.513 "bdev_zone_block_delete", 00:05:52.513 "bdev_zone_block_create", 00:05:52.513 "blobfs_create", 00:05:52.513 "blobfs_detect", 00:05:52.513 "blobfs_set_cache_size", 00:05:52.513 "bdev_aio_delete", 00:05:52.513 "bdev_aio_rescan", 00:05:52.513 "bdev_aio_create", 00:05:52.513 "bdev_ftl_set_property", 00:05:52.513 "bdev_ftl_get_properties", 00:05:52.514 "bdev_ftl_get_stats", 00:05:52.514 "bdev_ftl_unmap", 00:05:52.514 "bdev_ftl_unload", 00:05:52.514 "bdev_ftl_delete", 00:05:52.514 "bdev_ftl_load", 00:05:52.514 "bdev_ftl_create", 00:05:52.514 "bdev_virtio_attach_controller", 00:05:52.514 "bdev_virtio_scsi_get_devices", 00:05:52.514 "bdev_virtio_detach_controller", 00:05:52.514 "bdev_virtio_blk_set_hotplug", 00:05:52.514 "bdev_iscsi_delete", 00:05:52.514 "bdev_iscsi_create", 00:05:52.514 "bdev_iscsi_set_options", 00:05:52.514 "accel_error_inject_error", 00:05:52.514 "ioat_scan_accel_module", 00:05:52.514 "dsa_scan_accel_module", 00:05:52.514 "iaa_scan_accel_module", 00:05:52.514 "vfu_virtio_create_scsi_endpoint", 00:05:52.514 "vfu_virtio_scsi_remove_target", 00:05:52.514 "vfu_virtio_scsi_add_target", 00:05:52.514 "vfu_virtio_create_blk_endpoint", 00:05:52.514 "vfu_virtio_delete_endpoint", 00:05:52.514 "keyring_file_remove_key", 00:05:52.514 "keyring_file_add_key", 00:05:52.514 "keyring_linux_set_options", 00:05:52.514 "iscsi_get_histogram", 00:05:52.514 "iscsi_enable_histogram", 00:05:52.514 "iscsi_set_options", 00:05:52.514 "iscsi_get_auth_groups", 00:05:52.514 "iscsi_auth_group_remove_secret", 00:05:52.514 "iscsi_auth_group_add_secret", 00:05:52.514 "iscsi_delete_auth_group", 00:05:52.514 "iscsi_create_auth_group", 00:05:52.514 "iscsi_set_discovery_auth", 00:05:52.514 "iscsi_get_options", 00:05:52.514 "iscsi_target_node_request_logout", 00:05:52.514 "iscsi_target_node_set_redirect", 00:05:52.514 "iscsi_target_node_set_auth", 00:05:52.514 "iscsi_target_node_add_lun", 00:05:52.514 "iscsi_get_stats", 00:05:52.514 "iscsi_get_connections", 00:05:52.514 "iscsi_portal_group_set_auth", 00:05:52.514 "iscsi_start_portal_group", 00:05:52.514 "iscsi_delete_portal_group", 00:05:52.514 "iscsi_create_portal_group", 00:05:52.514 "iscsi_get_portal_groups", 00:05:52.514 "iscsi_delete_target_node", 00:05:52.514 "iscsi_target_node_remove_pg_ig_maps", 00:05:52.514 "iscsi_target_node_add_pg_ig_maps", 00:05:52.514 "iscsi_create_target_node", 00:05:52.514 "iscsi_get_target_nodes", 00:05:52.514 "iscsi_delete_initiator_group", 00:05:52.514 "iscsi_initiator_group_remove_initiators", 00:05:52.514 "iscsi_initiator_group_add_initiators", 00:05:52.514 "iscsi_create_initiator_group", 00:05:52.514 "iscsi_get_initiator_groups", 00:05:52.514 "nvmf_set_crdt", 00:05:52.514 "nvmf_set_config", 00:05:52.514 "nvmf_set_max_subsystems", 00:05:52.514 "nvmf_stop_mdns_prr", 00:05:52.514 "nvmf_publish_mdns_prr", 00:05:52.514 "nvmf_subsystem_get_listeners", 00:05:52.514 "nvmf_subsystem_get_qpairs", 00:05:52.514 "nvmf_subsystem_get_controllers", 00:05:52.514 
"nvmf_get_stats", 00:05:52.514 "nvmf_get_transports", 00:05:52.514 "nvmf_create_transport", 00:05:52.514 "nvmf_get_targets", 00:05:52.514 "nvmf_delete_target", 00:05:52.514 "nvmf_create_target", 00:05:52.514 "nvmf_subsystem_allow_any_host", 00:05:52.514 "nvmf_subsystem_remove_host", 00:05:52.514 "nvmf_subsystem_add_host", 00:05:52.514 "nvmf_ns_remove_host", 00:05:52.514 "nvmf_ns_add_host", 00:05:52.514 "nvmf_subsystem_remove_ns", 00:05:52.514 "nvmf_subsystem_add_ns", 00:05:52.514 "nvmf_subsystem_listener_set_ana_state", 00:05:52.514 "nvmf_discovery_get_referrals", 00:05:52.514 "nvmf_discovery_remove_referral", 00:05:52.514 "nvmf_discovery_add_referral", 00:05:52.514 "nvmf_subsystem_remove_listener", 00:05:52.514 "nvmf_subsystem_add_listener", 00:05:52.514 "nvmf_delete_subsystem", 00:05:52.514 "nvmf_create_subsystem", 00:05:52.514 "nvmf_get_subsystems", 00:05:52.514 "env_dpdk_get_mem_stats", 00:05:52.514 "nbd_get_disks", 00:05:52.514 "nbd_stop_disk", 00:05:52.514 "nbd_start_disk", 00:05:52.514 "ublk_recover_disk", 00:05:52.514 "ublk_get_disks", 00:05:52.514 "ublk_stop_disk", 00:05:52.514 "ublk_start_disk", 00:05:52.514 "ublk_destroy_target", 00:05:52.514 "ublk_create_target", 00:05:52.514 "virtio_blk_create_transport", 00:05:52.514 "virtio_blk_get_transports", 00:05:52.514 "vhost_controller_set_coalescing", 00:05:52.514 "vhost_get_controllers", 00:05:52.514 "vhost_delete_controller", 00:05:52.514 "vhost_create_blk_controller", 00:05:52.514 "vhost_scsi_controller_remove_target", 00:05:52.514 "vhost_scsi_controller_add_target", 00:05:52.514 "vhost_start_scsi_controller", 00:05:52.514 "vhost_create_scsi_controller", 00:05:52.514 "thread_set_cpumask", 00:05:52.514 "framework_get_governor", 00:05:52.514 "framework_get_scheduler", 00:05:52.514 "framework_set_scheduler", 00:05:52.514 "framework_get_reactors", 00:05:52.514 "thread_get_io_channels", 00:05:52.514 "thread_get_pollers", 00:05:52.514 "thread_get_stats", 00:05:52.514 "framework_monitor_context_switch", 00:05:52.514 "spdk_kill_instance", 00:05:52.514 "log_enable_timestamps", 00:05:52.514 "log_get_flags", 00:05:52.514 "log_clear_flag", 00:05:52.514 "log_set_flag", 00:05:52.514 "log_get_level", 00:05:52.514 "log_set_level", 00:05:52.514 "log_get_print_level", 00:05:52.514 "log_set_print_level", 00:05:52.514 "framework_enable_cpumask_locks", 00:05:52.514 "framework_disable_cpumask_locks", 00:05:52.514 "framework_wait_init", 00:05:52.514 "framework_start_init", 00:05:52.514 "scsi_get_devices", 00:05:52.514 "bdev_get_histogram", 00:05:52.514 "bdev_enable_histogram", 00:05:52.514 "bdev_set_qos_limit", 00:05:52.514 "bdev_set_qd_sampling_period", 00:05:52.514 "bdev_get_bdevs", 00:05:52.514 "bdev_reset_iostat", 00:05:52.514 "bdev_get_iostat", 00:05:52.514 "bdev_examine", 00:05:52.514 "bdev_wait_for_examine", 00:05:52.514 "bdev_set_options", 00:05:52.514 "notify_get_notifications", 00:05:52.514 "notify_get_types", 00:05:52.514 "accel_get_stats", 00:05:52.514 "accel_set_options", 00:05:52.514 "accel_set_driver", 00:05:52.514 "accel_crypto_key_destroy", 00:05:52.514 "accel_crypto_keys_get", 00:05:52.514 "accel_crypto_key_create", 00:05:52.514 "accel_assign_opc", 00:05:52.514 "accel_get_module_info", 00:05:52.514 "accel_get_opc_assignments", 00:05:52.514 "vmd_rescan", 00:05:52.514 "vmd_remove_device", 00:05:52.514 "vmd_enable", 00:05:52.514 "sock_get_default_impl", 00:05:52.514 "sock_set_default_impl", 00:05:52.514 "sock_impl_set_options", 00:05:52.514 "sock_impl_get_options", 00:05:52.514 "iobuf_get_stats", 00:05:52.514 "iobuf_set_options", 
00:05:52.514 "keyring_get_keys", 00:05:52.514 "framework_get_pci_devices", 00:05:52.514 "framework_get_config", 00:05:52.514 "framework_get_subsystems", 00:05:52.514 "vfu_tgt_set_base_path", 00:05:52.514 "trace_get_info", 00:05:52.514 "trace_get_tpoint_group_mask", 00:05:52.514 "trace_disable_tpoint_group", 00:05:52.514 "trace_enable_tpoint_group", 00:05:52.514 "trace_clear_tpoint_mask", 00:05:52.514 "trace_set_tpoint_mask", 00:05:52.514 "spdk_get_version", 00:05:52.514 "rpc_get_methods" 00:05:52.514 ] 00:05:52.514 15:42:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:52.514 15:42:49 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:52.514 15:42:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.514 15:42:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:52.514 15:42:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 638558 00:05:52.514 15:42:49 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 638558 ']' 00:05:52.514 15:42:49 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 638558 00:05:52.514 15:42:49 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:52.514 15:42:49 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.514 15:42:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 638558 00:05:52.514 15:42:49 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.514 15:42:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.514 15:42:49 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 638558' 00:05:52.514 killing process with pid 638558 00:05:52.514 15:42:49 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 638558 00:05:52.514 15:42:49 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 638558 00:05:53.112 00:05:53.112 real 0m1.258s 00:05:53.112 user 0m2.223s 00:05:53.112 sys 0m0.429s 00:05:53.112 15:42:50 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.112 15:42:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.112 ************************************ 00:05:53.112 END TEST spdkcli_tcp 00:05:53.112 ************************************ 00:05:53.112 15:42:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:53.112 15:42:50 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.112 15:42:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.112 15:42:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.112 15:42:50 -- common/autotest_common.sh@10 -- # set +x 00:05:53.112 ************************************ 00:05:53.112 START TEST dpdk_mem_utility 00:05:53.112 ************************************ 00:05:53.112 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.112 * Looking for test storage... 
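The spdkcli_tcp run above reaches the same RPC surface over TCP instead of the UNIX socket: tcp.sh keeps a socat bridge from 127.0.0.1:9998 to /var/tmp/spdk.sock and points rpc.py at the TCP endpoint, which is what produces the long rpc_get_methods listing. The commands are taken from the trace; only the /path/to/spdk prefix is a placeholder for the workspace checkout:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &      # bridge TCP 9998 to the target's RPC socket
    socat_pid=$!
    /path/to/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods   # -s/-p pick the TCP endpoint, -r/-t retries and timeout
    kill "$socat_pid"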
00:05:53.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:53.112 15:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:53.112 15:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=638767 00:05:53.112 15:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.112 15:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 638767 00:05:53.112 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 638767 ']' 00:05:53.112 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.112 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.112 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.112 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.112 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.112 [2024-07-12 15:42:50.320965] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:05:53.112 [2024-07-12 15:42:50.321067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid638767 ] 00:05:53.112 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.112 [2024-07-12 15:42:50.377856] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.370 [2024-07-12 15:42:50.484606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.627 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.627 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:53.627 15:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:53.627 15:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:53.627 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.627 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.627 { 00:05:53.627 "filename": "/tmp/spdk_mem_dump.txt" 00:05:53.627 } 00:05:53.627 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.627 15:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:53.627 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:53.627 1 heaps totaling size 814.000000 MiB 00:05:53.627 size: 814.000000 MiB heap id: 0 00:05:53.627 end heaps---------- 00:05:53.627 8 mempools totaling size 598.116089 MiB 00:05:53.627 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:53.627 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:53.627 size: 84.521057 MiB name: bdev_io_638767 00:05:53.627 size: 51.011292 MiB name: evtpool_638767 00:05:53.627 size: 
50.003479 MiB name: msgpool_638767 00:05:53.627 size: 21.763794 MiB name: PDU_Pool 00:05:53.627 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:53.627 size: 0.026123 MiB name: Session_Pool 00:05:53.627 end mempools------- 00:05:53.627 6 memzones totaling size 4.142822 MiB 00:05:53.627 size: 1.000366 MiB name: RG_ring_0_638767 00:05:53.627 size: 1.000366 MiB name: RG_ring_1_638767 00:05:53.627 size: 1.000366 MiB name: RG_ring_4_638767 00:05:53.627 size: 1.000366 MiB name: RG_ring_5_638767 00:05:53.627 size: 0.125366 MiB name: RG_ring_2_638767 00:05:53.627 size: 0.015991 MiB name: RG_ring_3_638767 00:05:53.627 end memzones------- 00:05:53.627 15:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:53.627 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:53.627 list of free elements. size: 12.519348 MiB 00:05:53.627 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:53.627 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:53.627 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:53.627 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:53.627 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:53.627 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:53.627 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:53.627 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:53.627 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:53.627 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:53.627 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:53.627 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:53.627 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:53.627 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:53.627 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:53.627 list of standard malloc elements. 
size: 199.218079 MiB 00:05:53.627 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:53.627 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:53.627 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:53.628 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:53.628 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:53.628 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:53.628 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:53.628 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:53.628 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:53.628 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:53.628 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:53.628 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:53.628 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:53.628 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:53.628 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:53.628 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:53.628 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:53.628 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:53.628 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:53.628 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:53.628 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:53.628 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:53.628 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:53.628 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:53.628 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:53.628 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:53.628 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:53.628 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:53.628 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:53.628 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:53.628 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:53.628 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:53.628 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:53.628 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:53.628 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:53.628 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:53.628 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:53.628 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:53.628 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:53.628 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:53.628 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:53.628 list of memzone associated elements. 
size: 602.262573 MiB 00:05:53.628 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:53.628 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:53.628 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:53.628 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:53.628 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:53.628 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_638767_0 00:05:53.628 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:53.628 associated memzone info: size: 48.002930 MiB name: MP_evtpool_638767_0 00:05:53.628 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:53.628 associated memzone info: size: 48.002930 MiB name: MP_msgpool_638767_0 00:05:53.628 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:53.628 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:53.628 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:53.628 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:53.628 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:53.628 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_638767 00:05:53.628 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:53.628 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_638767 00:05:53.628 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:53.628 associated memzone info: size: 1.007996 MiB name: MP_evtpool_638767 00:05:53.628 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:53.628 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:53.628 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:53.628 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:53.628 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:53.628 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:53.628 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:53.628 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:53.628 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:53.628 associated memzone info: size: 1.000366 MiB name: RG_ring_0_638767 00:05:53.628 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:53.628 associated memzone info: size: 1.000366 MiB name: RG_ring_1_638767 00:05:53.628 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:53.628 associated memzone info: size: 1.000366 MiB name: RG_ring_4_638767 00:05:53.628 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:53.628 associated memzone info: size: 1.000366 MiB name: RG_ring_5_638767 00:05:53.628 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:53.628 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_638767 00:05:53.628 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:53.628 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:53.628 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:53.628 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:53.628 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:53.628 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:53.628 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:53.628 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_638767 00:05:53.628 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:53.628 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:53.628 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:53.628 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:53.628 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:53.628 associated memzone info: size: 0.015991 MiB name: RG_ring_3_638767 00:05:53.628 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:53.628 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:53.628 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:53.628 associated memzone info: size: 0.000183 MiB name: MP_msgpool_638767 00:05:53.628 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:53.628 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_638767 00:05:53.628 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:53.628 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:53.628 15:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:53.628 15:42:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 638767 00:05:53.628 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 638767 ']' 00:05:53.628 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 638767 00:05:53.628 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:53.628 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.628 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 638767 00:05:53.628 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.628 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.628 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 638767' 00:05:53.628 killing process with pid 638767 00:05:53.628 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 638767 00:05:53.628 15:42:50 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 638767 00:05:54.194 00:05:54.194 real 0m1.076s 00:05:54.194 user 0m1.019s 00:05:54.194 sys 0m0.422s 00:05:54.194 15:42:51 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.194 15:42:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.194 ************************************ 00:05:54.194 END TEST dpdk_mem_utility 00:05:54.194 ************************************ 00:05:54.194 15:42:51 -- common/autotest_common.sh@1142 -- # return 0 00:05:54.194 15:42:51 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:54.194 15:42:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.194 15:42:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.194 15:42:51 -- common/autotest_common.sh@10 -- # set +x 00:05:54.194 ************************************ 00:05:54.194 START TEST event 00:05:54.194 ************************************ 00:05:54.194 15:42:51 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:54.194 * Looking for test storage... 
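The dpdk_mem_utility output above comes from two steps: the env_dpdk_get_mem_stats RPC makes the running target write /tmp/spdk_mem_dump.txt (the filename echoed in the trace), and scripts/dpdk_mem_info.py then, presumably reading that dump, prints first the heap/mempool/memzone totals and then, with -m 0, the per-element view of heap id 0. Roughly, with /path/to/spdk as a placeholder:

    /path/to/spdk/scripts/rpc.py env_dpdk_get_mem_stats     # target writes /tmp/spdk_mem_dump.txt
    /path/to/spdk/scripts/dpdk_mem_info.py                  # heap, mempool and memzone totals
    /path/to/spdk/scripts/dpdk_mem_info.py -m 0             # free/busy element listing for heap id 0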
00:05:54.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:54.194 15:42:51 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:54.194 15:42:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:54.194 15:42:51 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:54.194 15:42:51 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:54.194 15:42:51 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.194 15:42:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.194 ************************************ 00:05:54.194 START TEST event_perf 00:05:54.194 ************************************ 00:05:54.194 15:42:51 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:54.194 Running I/O for 1 seconds...[2024-07-12 15:42:51.437554] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:05:54.194 [2024-07-12 15:42:51.437619] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid638955 ] 00:05:54.194 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.453 [2024-07-12 15:42:51.497108] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.453 [2024-07-12 15:42:51.600185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.453 [2024-07-12 15:42:51.600246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.453 [2024-07-12 15:42:51.600355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.453 [2024-07-12 15:42:51.600358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.826 Running I/O for 1 seconds... 00:05:55.826 lcore 0: 225120 00:05:55.826 lcore 1: 225118 00:05:55.826 lcore 2: 225118 00:05:55.826 lcore 3: 225120 00:05:55.826 done. 00:05:55.826 00:05:55.826 real 0m1.290s 00:05:55.826 user 0m4.211s 00:05:55.826 sys 0m0.075s 00:05:55.826 15:42:52 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.826 15:42:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.826 ************************************ 00:05:55.826 END TEST event_perf 00:05:55.826 ************************************ 00:05:55.826 15:42:52 event -- common/autotest_common.sh@1142 -- # return 0 00:05:55.826 15:42:52 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:55.826 15:42:52 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:55.826 15:42:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.826 15:42:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.826 ************************************ 00:05:55.826 START TEST event_reactor 00:05:55.826 ************************************ 00:05:55.826 15:42:52 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:55.826 [2024-07-12 15:42:52.779042] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
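event_perf above is a one-second microbenchmark of the event framework: -m 0xF spreads reactors across four cores, -t 1 sets the runtime, and the lcore lines report how many events each reactor got through. An illustrative sweep over core masks (not part of this run, which only uses -m 0xF -t 1) would be:

    for mask in 0x1 0x3 0xF; do      # illustrative masks; binary path as in the trace, prefix is a placeholder
        /path/to/spdk/test/event/event_perf/event_perf -m "$mask" -t 1
    done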
00:05:55.826 [2024-07-12 15:42:52.779111] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639112 ] 00:05:55.826 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.826 [2024-07-12 15:42:52.836478] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.826 [2024-07-12 15:42:52.939249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.759 test_start 00:05:56.759 oneshot 00:05:56.759 tick 100 00:05:56.759 tick 100 00:05:56.759 tick 250 00:05:56.759 tick 100 00:05:56.759 tick 100 00:05:56.759 tick 100 00:05:56.759 tick 250 00:05:56.759 tick 500 00:05:56.759 tick 100 00:05:56.759 tick 100 00:05:56.759 tick 250 00:05:56.759 tick 100 00:05:56.759 tick 100 00:05:56.759 test_end 00:05:56.759 00:05:56.759 real 0m1.283s 00:05:56.759 user 0m1.202s 00:05:56.759 sys 0m0.077s 00:05:56.759 15:42:54 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.759 15:42:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:56.759 ************************************ 00:05:56.759 END TEST event_reactor 00:05:56.759 ************************************ 00:05:57.018 15:42:54 event -- common/autotest_common.sh@1142 -- # return 0 00:05:57.018 15:42:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:57.018 15:42:54 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:57.018 15:42:54 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.018 15:42:54 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.018 ************************************ 00:05:57.018 START TEST event_reactor_perf 00:05:57.018 ************************************ 00:05:57.018 15:42:54 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:57.018 [2024-07-12 15:42:54.115082] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
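The reactor test just above is driven the same way (-t 1 for a one-second run); its oneshot/tick lines presumably list the test's pollers as they fire, with 100/250/500 being their registered periods. Invocation is just the binary plus -t; passing the usual SPDK app options such as a core mask is an assumption based on the common app framework rather than something this run shows:

    /path/to/spdk/test/event/reactor/reactor -t 1            # as in the trace
    /path/to/spdk/test/event/reactor/reactor -m 0x1 -t 1     # assumed variant: pin to a single core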
00:05:57.018 [2024-07-12 15:42:54.115148] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639264 ] 00:05:57.018 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.018 [2024-07-12 15:42:54.173256] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.018 [2024-07-12 15:42:54.276141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.391 test_start 00:05:58.391 test_end 00:05:58.391 Performance: 442925 events per second 00:05:58.391 00:05:58.391 real 0m1.287s 00:05:58.391 user 0m1.205s 00:05:58.391 sys 0m0.078s 00:05:58.391 15:42:55 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.392 15:42:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:58.392 ************************************ 00:05:58.392 END TEST event_reactor_perf 00:05:58.392 ************************************ 00:05:58.392 15:42:55 event -- common/autotest_common.sh@1142 -- # return 0 00:05:58.392 15:42:55 event -- event/event.sh@49 -- # uname -s 00:05:58.392 15:42:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:58.392 15:42:55 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:58.392 15:42:55 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.392 15:42:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.392 15:42:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.392 ************************************ 00:05:58.392 START TEST event_scheduler 00:05:58.392 ************************************ 00:05:58.392 15:42:55 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:58.392 * Looking for test storage... 00:05:58.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:58.392 15:42:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:58.392 15:42:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=639450 00:05:58.392 15:42:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:58.392 15:42:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.392 15:42:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 639450 00:05:58.392 15:42:55 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 639450 ']' 00:05:58.392 15:42:55 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.392 15:42:55 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.392 15:42:55 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
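reactor_perf reduces to a single headline figure (442925 events per second above). If you wanted to script around it, the number can be pulled out of the output; the parsing below is illustrative, and only the binary path and -t come from the trace:

    out=$(/path/to/spdk/test/event/reactor_perf/reactor_perf -t 1)
    eps=$(printf '%s\n' "$out" | awk '/Performance:/ {print $2}')    # second field of the "Performance:" line
    echo "reactor_perf: $eps events per second"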
00:05:58.392 15:42:55 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.392 15:42:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.392 [2024-07-12 15:42:55.540606] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:05:58.392 [2024-07-12 15:42:55.540680] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639450 ] 00:05:58.392 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.392 [2024-07-12 15:42:55.604483] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.651 [2024-07-12 15:42:55.722809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.651 [2024-07-12 15:42:55.722881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.651 [2024-07-12 15:42:55.722939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.651 [2024-07-12 15:42:55.722942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.651 15:42:55 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.651 15:42:55 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:58.651 15:42:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:58.651 15:42:55 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.651 15:42:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.651 [2024-07-12 15:42:55.771677] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:58.651 [2024-07-12 15:42:55.771703] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:58.651 [2024-07-12 15:42:55.771735] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:58.651 [2024-07-12 15:42:55.771754] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:58.651 [2024-07-12 15:42:55.771765] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:58.651 15:42:55 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.651 15:42:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:58.651 15:42:55 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.651 15:42:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.651 [2024-07-12 15:42:55.870686] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
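The scheduler test app is launched with --wait-for-rpc, so framework initialization pauses until RPC is reachable; the trace then selects the dynamic scheduler and finishes init over RPC, which is where the dpdk governor notices and the load/core/busy limit messages come from. Outside the harness, which uses rpc_cmd and waitforlisten, the same sequence looks roughly like this, with the sleep as a crude stand-in for a proper wait:

    /path/to/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &   # flags copied from the trace
    sched_pid=$!
    sleep 1                                                  # crude stand-in for waitforlisten
    /path/to/spdk/scripts/rpc.py framework_set_scheduler dynamic
    /path/to/spdk/scripts/rpc.py framework_start_init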
00:05:58.651 15:42:55 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.651 15:42:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:58.651 15:42:55 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.651 15:42:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.651 15:42:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.651 ************************************ 00:05:58.651 START TEST scheduler_create_thread 00:05:58.651 ************************************ 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.651 2 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.651 3 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.651 4 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.651 5 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.651 6 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.651 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.909 7 00:05:58.909 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.909 15:42:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:58.909 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.909 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.909 8 00:05:58.909 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.909 15:42:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.910 9 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.910 10 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.910 15:42:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.910 15:42:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:58.910 15:42:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:58.910 15:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.910 15:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.476 15:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.476 00:05:59.476 real 0m0.591s 00:05:59.476 user 0m0.010s 00:05:59.476 sys 0m0.004s 00:05:59.476 15:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.476 15:42:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.476 ************************************ 00:05:59.476 END TEST scheduler_create_thread 00:05:59.476 ************************************ 00:05:59.476 15:42:56 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:59.476 15:42:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:59.476 15:42:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 639450 00:05:59.476 15:42:56 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 639450 ']' 00:05:59.476 15:42:56 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 639450 00:05:59.476 15:42:56 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:59.476 15:42:56 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.476 15:42:56 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 639450 00:05:59.476 15:42:56 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:59.476 15:42:56 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:59.476 15:42:56 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 639450' 00:05:59.476 killing process with pid 639450 00:05:59.476 15:42:56 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 639450 00:05:59.476 15:42:56 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 639450 00:05:59.735 [2024-07-12 15:42:56.970868] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
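The scheduler_create_thread subtest drives the app through a test-local rpc.py plugin; rpc_cmd in the trace is the harness wrapper (a function sourced from autotest_common.sh) around scripts/rpc.py with that plugin loaded. A representative subset of the calls it issued, with thread ids 11 and 12 being the ones this run got back:

    # -n names the thread, -m pins its cpumask, -a is presumably its active percentage
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0       # returned thread_id 11
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50                 # bump the half_active thread to 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100         # returned thread_id 12
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12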
00:05:59.994 00:05:59.994 real 0m1.788s 00:05:59.994 user 0m2.275s 00:05:59.994 sys 0m0.335s 00:05:59.994 15:42:57 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.994 15:42:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.994 ************************************ 00:05:59.994 END TEST event_scheduler 00:05:59.994 ************************************ 00:05:59.994 15:42:57 event -- common/autotest_common.sh@1142 -- # return 0 00:05:59.994 15:42:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:59.994 15:42:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:59.994 15:42:57 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.994 15:42:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.994 15:42:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.253 ************************************ 00:06:00.253 START TEST app_repeat 00:06:00.253 ************************************ 00:06:00.253 15:42:57 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:00.253 15:42:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.253 15:42:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.253 15:42:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:00.253 15:42:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.253 15:42:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:00.253 15:42:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:00.253 15:42:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:00.253 15:42:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=639766 00:06:00.253 15:42:57 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:00.253 15:42:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.253 15:42:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 639766' 00:06:00.253 Process app_repeat pid: 639766 00:06:00.253 15:42:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.253 15:42:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:00.253 spdk_app_start Round 0 00:06:00.253 15:42:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 639766 /var/tmp/spdk-nbd.sock 00:06:00.253 15:42:57 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 639766 ']' 00:06:00.253 15:42:57 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.253 15:42:57 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.253 15:42:57 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.253 15:42:57 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.253 15:42:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.253 [2024-07-12 15:42:57.315995] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
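app_repeat, starting here, is driven over its own RPC socket /var/tmp/spdk-nbd.sock with a two-core mask (flags -r, -m 0x3 and -t 4 as in the trace). As the trace below shows, it creates two 64 MiB malloc bdevs with 4096-byte blocks, exports them as /dev/nbd0 and /dev/nbd1, and verifies each with a single direct-I/O dd read. Stripped of the harness, one device's worth of that is roughly the following; /tmp/nbdtest is a placeholder for the test's scratch file, and nbd_stop_disk is taken from the method list above rather than from this excerpt:

    modprobe nbd                                             # the harness probes the module first
    rpc() { /path/to/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create 64 4096                           # 64 MiB bdev, 4096-byte blocks -> Malloc0
    rpc nbd_start_disk Malloc0 /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    rpc nbd_stop_disk /dev/nbd0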
00:06:00.253 [2024-07-12 15:42:57.316068] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid639766 ] 00:06:00.253 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.253 [2024-07-12 15:42:57.373622] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.253 [2024-07-12 15:42:57.474225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.253 [2024-07-12 15:42:57.474229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.511 15:42:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.511 15:42:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:00.512 15:42:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.770 Malloc0 00:06:00.770 15:42:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.028 Malloc1 00:06:01.028 15:42:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.028 15:42:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.028 15:42:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.028 15:42:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.028 15:42:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.028 15:42:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.028 15:42:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.028 15:42:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.028 15:42:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.028 15:42:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.028 15:42:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.029 15:42:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.029 15:42:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.029 15:42:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.029 15:42:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.029 15:42:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:01.287 /dev/nbd0 00:06:01.287 15:42:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:01.287 15:42:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:01.287 15:42:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:01.287 15:42:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:01.287 15:42:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.287 15:42:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.287 15:42:58 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:01.287 15:42:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:01.287 15:42:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.287 15:42:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.287 15:42:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.287 1+0 records in 00:06:01.287 1+0 records out 00:06:01.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000145224 s, 28.2 MB/s 00:06:01.287 15:42:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.287 15:42:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:01.287 15:42:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.287 15:42:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.287 15:42:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:01.287 15:42:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.287 15:42:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.287 15:42:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.546 /dev/nbd1 00:06:01.546 15:42:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.546 15:42:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.546 15:42:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:01.546 15:42:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:01.546 15:42:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:01.546 15:42:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:01.546 15:42:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:01.546 15:42:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:01.546 15:42:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:01.546 15:42:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:01.546 15:42:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.546 1+0 records in 00:06:01.546 1+0 records out 00:06:01.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191909 s, 21.3 MB/s 00:06:01.546 15:42:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.546 15:42:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:01.546 15:42:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:01.546 15:42:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:01.546 15:42:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:01.546 15:42:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.546 15:42:58 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.546 15:42:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.546 15:42:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.546 15:42:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.804 { 00:06:01.804 "nbd_device": "/dev/nbd0", 00:06:01.804 "bdev_name": "Malloc0" 00:06:01.804 }, 00:06:01.804 { 00:06:01.804 "nbd_device": "/dev/nbd1", 00:06:01.804 "bdev_name": "Malloc1" 00:06:01.804 } 00:06:01.804 ]' 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.804 { 00:06:01.804 "nbd_device": "/dev/nbd0", 00:06:01.804 "bdev_name": "Malloc0" 00:06:01.804 }, 00:06:01.804 { 00:06:01.804 "nbd_device": "/dev/nbd1", 00:06:01.804 "bdev_name": "Malloc1" 00:06:01.804 } 00:06:01.804 ]' 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.804 /dev/nbd1' 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.804 /dev/nbd1' 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.804 256+0 records in 00:06:01.804 256+0 records out 00:06:01.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00396958 s, 264 MB/s 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.804 256+0 records in 00:06:01.804 256+0 records out 00:06:01.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.022036 s, 47.6 MB/s 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.804 256+0 records in 00:06:01.804 256+0 records out 00:06:01.804 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0264735 s, 39.6 MB/s 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.804 15:42:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.804 15:42:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.805 15:42:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.805 15:42:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.805 15:42:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.805 15:42:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.805 15:42:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.805 15:42:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.805 15:42:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.066 15:42:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.067 15:42:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.067 15:42:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.067 15:42:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.067 15:42:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.067 15:42:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.067 15:42:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.067 15:42:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.067 15:42:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.067 15:42:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.327 15:42:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.327 15:42:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.327 15:42:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.327 15:42:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.327 15:42:59 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.327 15:42:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.327 15:42:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.327 15:42:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.327 15:42:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.327 15:42:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.327 15:42:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.584 15:42:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.584 15:42:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.584 15:42:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.584 15:42:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.584 15:42:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.584 15:42:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.584 15:42:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.584 15:42:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.584 15:42:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.584 15:42:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.584 15:42:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.584 15:42:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.584 15:42:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.842 15:43:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.099 [2024-07-12 15:43:00.339588] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.357 [2024-07-12 15:43:00.442644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.357 [2024-07-12 15:43:00.442646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.357 [2024-07-12 15:43:00.501406] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.357 [2024-07-12 15:43:00.501475] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.894 15:43:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.894 15:43:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:05.894 spdk_app_start Round 1 00:06:05.894 15:43:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 639766 /var/tmp/spdk-nbd.sock 00:06:05.894 15:43:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 639766 ']' 00:06:05.894 15:43:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.894 15:43:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.894 15:43:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
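The write/verify pass inside each round is nbd_dd_data_verify: 256 blocks of 4 KiB are pulled from /dev/urandom into a scratch file, that file is copied onto each NBD device with oflag=direct, and the first 1 MiB is read back and compared with cmp. Condensed from the xtrace above (the scratch-file path is abbreviated here via $SPDK_DIR):

  nbd_list=(/dev/nbd0 /dev/nbd1)
  tmp_file=$SPDK_DIR/test/event/nbdrandtest            # full workspace path abbreviated
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256  # 1 MiB of random data
  for nbd in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # write pass
  done
  for nbd in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$nbd"                   # verify pass, fails the test on mismatch
  done
  rm "$tmp_file"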
00:06:05.894 15:43:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.894 15:43:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.157 15:43:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.157 15:43:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:06.157 15:43:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.414 Malloc0 00:06:06.414 15:43:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.673 Malloc1 00:06:06.673 15:43:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.673 15:43:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.930 /dev/nbd0 00:06:06.930 15:43:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.930 15:43:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.930 15:43:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:06.930 15:43:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:06.931 15:43:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:06.931 15:43:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:06.931 15:43:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:06.931 15:43:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:06.931 15:43:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:06.931 15:43:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:06.931 15:43:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:06.931 1+0 records in 00:06:06.931 1+0 records out 00:06:06.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253252 s, 16.2 MB/s 00:06:06.931 15:43:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.931 15:43:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:06.931 15:43:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.931 15:43:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:06.931 15:43:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:06.931 15:43:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.931 15:43:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.931 15:43:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.188 /dev/nbd1 00:06:07.188 15:43:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.188 15:43:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.189 15:43:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:07.189 15:43:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:07.189 15:43:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:07.189 15:43:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:07.189 15:43:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:07.189 15:43:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:07.189 15:43:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:07.189 15:43:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:07.189 15:43:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.189 1+0 records in 00:06:07.189 1+0 records out 00:06:07.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175525 s, 23.3 MB/s 00:06:07.189 15:43:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.189 15:43:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:07.189 15:43:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:07.189 15:43:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:07.189 15:43:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:07.189 15:43:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.189 15:43:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.189 15:43:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.189 15:43:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.189 15:43:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:07.447 { 00:06:07.447 "nbd_device": "/dev/nbd0", 00:06:07.447 "bdev_name": "Malloc0" 00:06:07.447 }, 00:06:07.447 { 00:06:07.447 "nbd_device": "/dev/nbd1", 00:06:07.447 "bdev_name": "Malloc1" 00:06:07.447 } 00:06:07.447 ]' 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.447 { 00:06:07.447 "nbd_device": "/dev/nbd0", 00:06:07.447 "bdev_name": "Malloc0" 00:06:07.447 }, 00:06:07.447 { 00:06:07.447 "nbd_device": "/dev/nbd1", 00:06:07.447 "bdev_name": "Malloc1" 00:06:07.447 } 00:06:07.447 ]' 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.447 /dev/nbd1' 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.447 /dev/nbd1' 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.447 256+0 records in 00:06:07.447 256+0 records out 00:06:07.447 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00392446 s, 267 MB/s 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.447 15:43:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.705 256+0 records in 00:06:07.705 256+0 records out 00:06:07.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215263 s, 48.7 MB/s 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.705 256+0 records in 00:06:07.705 256+0 records out 00:06:07.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223317 s, 47.0 MB/s 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.705 15:43:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.962 15:43:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.963 15:43:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.963 15:43:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.963 15:43:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.963 15:43:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.963 15:43:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.963 15:43:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.963 15:43:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.963 15:43:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.963 15:43:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.220 15:43:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.220 15:43:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.220 15:43:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.220 15:43:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.220 15:43:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.220 15:43:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.220 15:43:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:08.220 15:43:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.220 15:43:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.220 15:43:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.220 15:43:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.477 15:43:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.477 15:43:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.477 15:43:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.477 15:43:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.477 15:43:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.477 15:43:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.477 15:43:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.477 15:43:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.477 15:43:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.477 15:43:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.477 15:43:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.477 15:43:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.477 15:43:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.735 15:43:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.994 [2024-07-12 15:43:06.139038] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.994 [2024-07-12 15:43:06.240834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.994 [2024-07-12 15:43:06.240837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.252 [2024-07-12 15:43:06.300906] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.252 [2024-07-12 15:43:06.300976] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.778 15:43:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.778 15:43:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:11.778 spdk_app_start Round 2 00:06:11.778 15:43:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 639766 /var/tmp/spdk-nbd.sock 00:06:11.778 15:43:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 639766 ']' 00:06:11.778 15:43:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.778 15:43:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.778 15:43:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
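Before any data is written, each round also confirms that both NBD devices registered with the target: nbd_get_disks is queried on the per-test RPC socket and its JSON reply is reduced to a device count with jq and grep. A condensed version of that nbd_get_count check, with the error handling trimmed:

  nbd_get_count() {
      local rpc_server=$1
      local disks_json
      disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
      echo "$disks_json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd
  }

  count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
  [ "$count" -eq 2 ]     # two bdevs were exported, so two /dev/nbd entries are expected

The same helper runs again after nbd_stop_disk, where an empty list (count 0) is the expected result.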
00:06:11.778 15:43:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.778 15:43:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:12.036 15:43:09 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.036 15:43:09 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:12.036 15:43:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.294 Malloc0 00:06:12.294 15:43:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.552 Malloc1 00:06:12.552 15:43:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.552 15:43:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.810 /dev/nbd0 00:06:12.810 15:43:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.810 15:43:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.810 15:43:09 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:12.810 15:43:09 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:12.810 15:43:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:12.810 15:43:09 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:12.810 15:43:09 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:12.810 15:43:09 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:12.810 15:43:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:12.810 15:43:09 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:12.810 15:43:09 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:12.810 1+0 records in 00:06:12.810 1+0 records out 00:06:12.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182383 s, 22.5 MB/s 00:06:12.810 15:43:09 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.810 15:43:09 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:12.810 15:43:09 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:12.810 15:43:09 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:12.810 15:43:09 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:12.810 15:43:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.810 15:43:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.810 15:43:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.067 /dev/nbd1 00:06:13.067 15:43:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.067 15:43:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.067 15:43:10 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:13.067 15:43:10 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:13.067 15:43:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:13.067 15:43:10 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:13.067 15:43:10 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:13.067 15:43:10 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:13.067 15:43:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:13.067 15:43:10 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:13.067 15:43:10 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.067 1+0 records in 00:06:13.067 1+0 records out 00:06:13.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000161257 s, 25.4 MB/s 00:06:13.067 15:43:10 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.067 15:43:10 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:13.067 15:43:10 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:13.067 15:43:10 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:13.067 15:43:10 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:13.067 15:43:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.067 15:43:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.067 15:43:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.067 15:43:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.067 15:43:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:13.325 { 00:06:13.325 "nbd_device": "/dev/nbd0", 00:06:13.325 "bdev_name": "Malloc0" 00:06:13.325 }, 00:06:13.325 { 00:06:13.325 "nbd_device": "/dev/nbd1", 00:06:13.325 "bdev_name": "Malloc1" 00:06:13.325 } 00:06:13.325 ]' 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.325 { 00:06:13.325 "nbd_device": "/dev/nbd0", 00:06:13.325 "bdev_name": "Malloc0" 00:06:13.325 }, 00:06:13.325 { 00:06:13.325 "nbd_device": "/dev/nbd1", 00:06:13.325 "bdev_name": "Malloc1" 00:06:13.325 } 00:06:13.325 ]' 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.325 /dev/nbd1' 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.325 /dev/nbd1' 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.325 256+0 records in 00:06:13.325 256+0 records out 00:06:13.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450569 s, 233 MB/s 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.325 256+0 records in 00:06:13.325 256+0 records out 00:06:13.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208427 s, 50.3 MB/s 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.325 256+0 records in 00:06:13.325 256+0 records out 00:06:13.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228418 s, 45.9 MB/s 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.325 15:43:10 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.326 15:43:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.583 15:43:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.583 15:43:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.583 15:43:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.583 15:43:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.583 15:43:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.583 15:43:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.583 15:43:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.583 15:43:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.583 15:43:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.583 15:43:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.839 15:43:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.839 15:43:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.839 15:43:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.839 15:43:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.839 15:43:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.839 15:43:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.839 15:43:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.839 15:43:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.839 15:43:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.839 15:43:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.839 15:43:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.096 15:43:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.096 15:43:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.096 15:43:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.096 15:43:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.096 15:43:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.096 15:43:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.096 15:43:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:14.096 15:43:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.096 15:43:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.096 15:43:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.096 15:43:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.096 15:43:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.096 15:43:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.352 15:43:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:14.609 [2024-07-12 15:43:11.900000] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.868 [2024-07-12 15:43:12.001995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.868 [2024-07-12 15:43:12.001998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.868 [2024-07-12 15:43:12.060748] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:14.868 [2024-07-12 15:43:12.060823] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.452 15:43:14 event.app_repeat -- event/event.sh@38 -- # waitforlisten 639766 /var/tmp/spdk-nbd.sock 00:06:17.452 15:43:14 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 639766 ']' 00:06:17.452 15:43:14 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.452 15:43:14 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:17.452 15:43:14 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
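The waitfornbd and waitfornbd_exit helpers that recur in every round both poll /proc/partitions: the first waits for the nbd entry to appear after nbd_start_disk and then proves the device is readable with a single 4 KiB direct read, the second waits for the entry to disappear after nbd_stop_disk. A simplified rendering of the two loops (the 20-iteration cap, the grep, the dd probe and the stat check come from the trace; the sleep interval and scratch path are assumed):

  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                                    # interval assumed
      done
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]                                 # a zero-byte read means the device is not ready
  }

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || break
          sleep 0.1                                    # interval assumed
      done
  }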
00:06:17.452 15:43:14 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:17.452 15:43:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.710 15:43:14 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.710 15:43:14 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:17.710 15:43:14 event.app_repeat -- event/event.sh@39 -- # killprocess 639766 00:06:17.710 15:43:14 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 639766 ']' 00:06:17.710 15:43:14 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 639766 00:06:17.710 15:43:14 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:17.710 15:43:14 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.710 15:43:14 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 639766 00:06:17.710 15:43:14 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.710 15:43:14 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.710 15:43:14 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 639766' 00:06:17.710 killing process with pid 639766 00:06:17.710 15:43:14 event.app_repeat -- common/autotest_common.sh@967 -- # kill 639766 00:06:17.710 15:43:14 event.app_repeat -- common/autotest_common.sh@972 -- # wait 639766 00:06:17.969 spdk_app_start is called in Round 0. 00:06:17.969 Shutdown signal received, stop current app iteration 00:06:17.969 Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 reinitialization... 00:06:17.969 spdk_app_start is called in Round 1. 00:06:17.969 Shutdown signal received, stop current app iteration 00:06:17.969 Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 reinitialization... 00:06:17.969 spdk_app_start is called in Round 2. 00:06:17.969 Shutdown signal received, stop current app iteration 00:06:17.969 Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 reinitialization... 00:06:17.969 spdk_app_start is called in Round 3. 
00:06:17.969 Shutdown signal received, stop current app iteration 00:06:17.969 15:43:15 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:17.969 15:43:15 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:17.969 00:06:17.969 real 0m17.869s 00:06:17.969 user 0m38.731s 00:06:17.969 sys 0m3.180s 00:06:17.969 15:43:15 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.969 15:43:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.969 ************************************ 00:06:17.969 END TEST app_repeat 00:06:17.969 ************************************ 00:06:17.969 15:43:15 event -- common/autotest_common.sh@1142 -- # return 0 00:06:17.969 15:43:15 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:17.969 15:43:15 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.969 15:43:15 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:17.969 15:43:15 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.969 15:43:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.969 ************************************ 00:06:17.969 START TEST cpu_locks 00:06:17.969 ************************************ 00:06:17.969 15:43:15 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:17.969 * Looking for test storage... 00:06:17.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:17.969 15:43:15 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:17.969 15:43:15 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:17.969 15:43:15 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:17.969 15:43:15 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:18.228 15:43:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.228 15:43:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.228 15:43:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.228 ************************************ 00:06:18.228 START TEST default_locks 00:06:18.228 ************************************ 00:06:18.228 15:43:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:18.228 15:43:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=642131 00:06:18.228 15:43:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.228 15:43:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 642131 00:06:18.228 15:43:15 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 642131 ']' 00:06:18.228 15:43:15 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.228 15:43:15 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.228 15:43:15 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
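The default_locks setup below waits on /var/tmp/spdk.sock with waitforlisten before touching the target. Only the echo and the 100-retry budget are visible in this trace, so the loop here is an illustrative stand-in for the helper rather than its actual body: it retries a cheap RPC probe until the socket answers or the retries run out (the rpc_get_methods probe and the 0.5 s sleep are assumptions):

  wait_for_rpc() {
      local rpc_addr=${1:-/var/tmp/spdk.sock}
      local max_retries=100
      local i
      for ((i = 0; i < max_retries; i++)); do
          if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
              return 0                                 # target is up and serving RPCs
          fi
          sleep 0.5
      done
      echo "timed out waiting for $rpc_addr" >&2
      return 1
  }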
00:06:18.228 15:43:15 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.228 15:43:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.228 [2024-07-12 15:43:15.334886] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:18.228 [2024-07-12 15:43:15.334964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid642131 ] 00:06:18.228 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.228 [2024-07-12 15:43:15.391996] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.228 [2024-07-12 15:43:15.499240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.487 15:43:15 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.487 15:43:15 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:18.487 15:43:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 642131 00:06:18.487 15:43:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 642131 00:06:18.487 15:43:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.745 lslocks: write error 00:06:18.745 15:43:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 642131 00:06:18.745 15:43:15 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 642131 ']' 00:06:18.745 15:43:15 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 642131 00:06:18.745 15:43:15 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:18.745 15:43:15 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:18.745 15:43:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 642131 00:06:18.745 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:18.745 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:18.745 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 642131' 00:06:18.745 killing process with pid 642131 00:06:18.745 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 642131 00:06:18.745 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 642131 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 642131 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 642131 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 642131 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 642131 ']' 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (642131) - No such process 00:06:19.310 ERROR: process (pid: 642131) is no longer running 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.310 00:06:19.310 real 0m1.175s 00:06:19.310 user 0m1.123s 00:06:19.310 sys 0m0.490s 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.310 15:43:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 ************************************ 00:06:19.310 END TEST default_locks 00:06:19.310 ************************************ 00:06:19.310 15:43:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:19.310 15:43:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:19.310 15:43:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:19.310 15:43:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.310 15:43:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 ************************************ 00:06:19.310 START TEST default_locks_via_rpc 00:06:19.310 ************************************ 00:06:19.310 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:19.310 15:43:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=642301 00:06:19.310 15:43:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.310 15:43:16 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 642301 00:06:19.310 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 642301 ']' 00:06:19.310 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.310 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:19.310 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.310 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:19.310 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.310 [2024-07-12 15:43:16.561898] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:19.310 [2024-07-12 15:43:16.561993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid642301 ] 00:06:19.310 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.568 [2024-07-12 15:43:16.619838] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.568 [2024-07-12 15:43:16.718706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 642301 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 642301 00:06:19.825 15:43:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.083 15:43:17 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 642301 00:06:20.083 15:43:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 642301 ']' 00:06:20.083 15:43:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 642301 00:06:20.083 15:43:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:20.083 15:43:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.083 15:43:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 642301 00:06:20.083 15:43:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.083 15:43:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.083 15:43:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 642301' 00:06:20.083 killing process with pid 642301 00:06:20.083 15:43:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 642301 00:06:20.083 15:43:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 642301 00:06:20.647 00:06:20.647 real 0m1.211s 00:06:20.647 user 0m1.124s 00:06:20.647 sys 0m0.501s 00:06:20.647 15:43:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.647 15:43:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.647 ************************************ 00:06:20.647 END TEST default_locks_via_rpc 00:06:20.647 ************************************ 00:06:20.647 15:43:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:20.647 15:43:17 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:20.648 15:43:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.648 15:43:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.648 15:43:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.648 ************************************ 00:06:20.648 START TEST non_locking_app_on_locked_coremask 00:06:20.648 ************************************ 00:06:20.648 15:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:20.648 15:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=642461 00:06:20.648 15:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.648 15:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 642461 /var/tmp/spdk.sock 00:06:20.648 15:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 642461 ']' 00:06:20.648 15:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.648 15:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.648 15:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.648 15:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.648 15:43:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.648 [2024-07-12 15:43:17.830304] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:20.648 [2024-07-12 15:43:17.830404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid642461 ] 00:06:20.648 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.648 [2024-07-12 15:43:17.888137] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.906 [2024-07-12 15:43:18.001639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.164 15:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.164 15:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:21.164 15:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=642584 00:06:21.164 15:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:21.164 15:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 642584 /var/tmp/spdk2.sock 00:06:21.164 15:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 642584 ']' 00:06:21.164 15:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.164 15:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.164 15:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.164 15:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.164 15:43:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.164 [2024-07-12 15:43:18.301053] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:21.164 [2024-07-12 15:43:18.301144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid642584 ] 00:06:21.164 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.164 [2024-07-12 15:43:18.384593] app.c: 910:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
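Note: the non_locking_app_on_locked_coremask trace around this point shows what the flag is for: a second spdk_tgt can share core 0 with the lock holder only because it is started with --disable-cpumask-locks and its own RPC socket. A condensed sketch, with the path and sleep-based waits as illustrative placeholders:

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

  "$SPDK_TGT" -m 0x1 &                    # first instance claims core 0
  pid1=$!
  sleep 1                                 # crude stand-in for waitforlisten

  # Second instance on the same core: opts out of core locking and gets its
  # own RPC socket so it does not collide with /var/tmp/spdk.sock.
  "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!
  sleep 1

  kill "$pid2" "$pid1"
  wait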
00:06:21.164 [2024-07-12 15:43:18.384618] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.421 [2024-07-12 15:43:18.599624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.987 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.987 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:21.987 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 642461 00:06:21.987 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 642461 00:06:21.987 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.551 lslocks: write error 00:06:22.551 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 642461 00:06:22.551 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 642461 ']' 00:06:22.551 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 642461 00:06:22.551 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:22.551 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.551 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 642461 00:06:22.551 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.551 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.551 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 642461' 00:06:22.551 killing process with pid 642461 00:06:22.551 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 642461 00:06:22.551 15:43:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 642461 00:06:23.482 15:43:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 642584 00:06:23.482 15:43:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 642584 ']' 00:06:23.482 15:43:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 642584 00:06:23.482 15:43:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:23.482 15:43:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.482 15:43:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 642584 00:06:23.482 15:43:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.483 15:43:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.483 15:43:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 642584' 00:06:23.483 killing 
process with pid 642584 00:06:23.483 15:43:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 642584 00:06:23.483 15:43:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 642584 00:06:24.047 00:06:24.047 real 0m3.289s 00:06:24.047 user 0m3.477s 00:06:24.047 sys 0m0.977s 00:06:24.047 15:43:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.047 15:43:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.047 ************************************ 00:06:24.047 END TEST non_locking_app_on_locked_coremask 00:06:24.047 ************************************ 00:06:24.047 15:43:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:24.047 15:43:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:24.047 15:43:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.047 15:43:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.047 15:43:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.047 ************************************ 00:06:24.047 START TEST locking_app_on_unlocked_coremask 00:06:24.047 ************************************ 00:06:24.047 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:24.047 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=642895 00:06:24.047 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:24.047 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 642895 /var/tmp/spdk.sock 00:06:24.047 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 642895 ']' 00:06:24.047 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.047 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.047 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.047 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.047 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.047 [2024-07-12 15:43:21.167366] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:24.047 [2024-07-12 15:43:21.167455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid642895 ] 00:06:24.047 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.047 [2024-07-12 15:43:21.223160] app.c: 910:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:24.047 [2024-07-12 15:43:21.223200] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.047 [2024-07-12 15:43:21.320813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.305 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.305 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:24.305 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=643024 00:06:24.305 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 643024 /var/tmp/spdk2.sock 00:06:24.305 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.305 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 643024 ']' 00:06:24.305 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.306 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.306 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.306 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.306 15:43:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.563 [2024-07-12 15:43:21.627626] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
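Note: locking_app_on_unlocked_coremask inverts the previous case: here the first target is the one started with --disable-cpumask-locks, so the plain second target on the same core is the one that ends up owning /var/tmp/spdk_cpu_lock_000, which the locks_exist check that follows confirms. A small helper in the same spirit (illustrative name; the lslocks|grep check mirrors the trace):

  show_core_lock_owner() {
      # Report which of the given PIDs holds an spdk_cpu_lock entry,
      # mirroring the locks_exist / lslocks check in the trace.
      local pid
      for pid in "$@"; do
          if lslocks -p "$pid" 2>/dev/null | grep -q spdk_cpu_lock; then
              echo "core lock held by pid $pid"
          else
              echo "no core lock for pid $pid"
          fi
      done
  }
  # e.g. show_core_lock_owner "$pid_without_locks" "$pid_with_locks"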
00:06:24.563 [2024-07-12 15:43:21.627707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid643024 ] 00:06:24.563 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.563 [2024-07-12 15:43:21.709744] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.821 [2024-07-12 15:43:21.924450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.386 15:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.386 15:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:25.386 15:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 643024 00:06:25.386 15:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 643024 00:06:25.386 15:43:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.950 lslocks: write error 00:06:25.950 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 642895 00:06:25.950 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 642895 ']' 00:06:25.950 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 642895 00:06:25.950 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:25.950 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.950 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 642895 00:06:25.950 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.950 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.950 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 642895' 00:06:25.950 killing process with pid 642895 00:06:25.950 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 642895 00:06:25.950 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 642895 00:06:26.882 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 643024 00:06:26.882 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 643024 ']' 00:06:26.882 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 643024 00:06:26.882 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:26.882 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.882 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 643024 00:06:26.882 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:26.882 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.882 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 643024' 00:06:26.882 killing process with pid 643024 00:06:26.882 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 643024 00:06:26.882 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 643024 00:06:27.142 00:06:27.142 real 0m3.283s 00:06:27.142 user 0m3.467s 00:06:27.142 sys 0m0.983s 00:06:27.142 15:43:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.142 15:43:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.142 ************************************ 00:06:27.142 END TEST locking_app_on_unlocked_coremask 00:06:27.142 ************************************ 00:06:27.142 15:43:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:27.142 15:43:24 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:27.142 15:43:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.142 15:43:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.142 15:43:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.401 ************************************ 00:06:27.401 START TEST locking_app_on_locked_coremask 00:06:27.401 ************************************ 00:06:27.401 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:27.401 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=643329 00:06:27.401 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.401 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 643329 /var/tmp/spdk.sock 00:06:27.401 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 643329 ']' 00:06:27.401 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.401 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.401 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.401 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.401 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.401 [2024-07-12 15:43:24.495679] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:06:27.401 [2024-07-12 15:43:24.495777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid643329 ] 00:06:27.401 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.401 [2024-07-12 15:43:24.552575] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.401 [2024-07-12 15:43:24.654310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=643457 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 643457 /var/tmp/spdk2.sock 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 643457 /var/tmp/spdk2.sock 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 643457 /var/tmp/spdk2.sock 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 643457 ']' 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.659 15:43:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.659 [2024-07-12 15:43:24.942496] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:06:27.659 [2024-07-12 15:43:24.942570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid643457 ] 00:06:27.916 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.916 [2024-07-12 15:43:25.031910] app.c: 775:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 643329 has claimed it. 00:06:27.916 [2024-07-12 15:43:25.031968] app.c: 906:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:28.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (643457) - No such process 00:06:28.481 ERROR: process (pid: 643457) is no longer running 00:06:28.481 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.481 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:28.481 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:28.481 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:28.481 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:28.481 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:28.481 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 643329 00:06:28.481 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 643329 00:06:28.481 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.739 lslocks: write error 00:06:28.739 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 643329 00:06:28.739 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 643329 ']' 00:06:28.739 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 643329 00:06:28.739 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:28.739 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.739 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 643329 00:06:28.739 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.739 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.739 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 643329' 00:06:28.739 killing process with pid 643329 00:06:28.739 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 643329 00:06:28.739 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 643329 00:06:29.305 00:06:29.305 real 0m1.969s 00:06:29.305 user 0m2.128s 00:06:29.305 sys 0m0.604s 00:06:29.305 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.305 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.305 ************************************ 00:06:29.305 END TEST locking_app_on_locked_coremask 00:06:29.305 ************************************ 00:06:29.305 15:43:26 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:29.305 15:43:26 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:29.305 15:43:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.305 15:43:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.305 15:43:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.305 ************************************ 00:06:29.305 START TEST locking_overlapped_coremask 00:06:29.305 ************************************ 00:06:29.305 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:29.305 15:43:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=643622 00:06:29.305 15:43:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:29.305 15:43:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 643622 /var/tmp/spdk.sock 00:06:29.305 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 643622 ']' 00:06:29.305 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.305 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.305 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.305 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.305 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.305 [2024-07-12 15:43:26.520251] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
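Note: the locking_app_on_locked_coremask run that ends just above is the failure path: without --disable-cpumask-locks, the second spdk_tgt on the already-claimed core 0 logs "Cannot create lock on core 0, probably process ... has claimed it" and exits, which is why the test wraps waitforlisten in NOT and expects a non-zero status. A hedged sketch of checking that expected failure (paths and sleeps illustrative):

  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

  "$SPDK_TGT" -m 0x1 &                    # holder of the core 0 lock
  holder=$!
  sleep 1                                 # crude stand-in for waitforlisten

  # Without --disable-cpumask-locks the second target should refuse core 0
  # ("Cannot create lock on core 0 ...") and exit non-zero.
  if "$SPDK_TGT" -m 0x1 -r /var/tmp/spdk2.sock; then
      echo "unexpected: second target started on a locked core" >&2
  else
      echo "second target exited as expected: core 0 already claimed"
  fi

  kill "$holder"; wait "$holder"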
00:06:29.305 [2024-07-12 15:43:26.520349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid643622 ] 00:06:29.305 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.305 [2024-07-12 15:43:26.578772] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.563 [2024-07-12 15:43:26.691480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.563 [2024-07-12 15:43:26.691547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.563 [2024-07-12 15:43:26.691550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=643642 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 643642 /var/tmp/spdk2.sock 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 643642 /var/tmp/spdk2.sock 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 643642 /var/tmp/spdk2.sock 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 643642 ']' 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.821 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.822 15:43:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.822 [2024-07-12 15:43:27.009898] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:06:29.822 [2024-07-12 15:43:27.009986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid643642 ] 00:06:29.822 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.822 [2024-07-12 15:43:27.104806] app.c: 775:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 643622 has claimed it. 00:06:29.822 [2024-07-12 15:43:27.104872] app.c: 906:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:30.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (643642) - No such process 00:06:30.755 ERROR: process (pid: 643642) is no longer running 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 643622 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 643622 ']' 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 643622 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 643622 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 643622' 00:06:30.755 killing process with pid 643622 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 643622 00:06:30.755 15:43:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 643622 00:06:31.013 00:06:31.013 real 0m1.716s 00:06:31.013 user 0m4.575s 00:06:31.013 sys 0m0.467s 00:06:31.013 15:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.013 15:43:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.013 ************************************ 00:06:31.013 END TEST locking_overlapped_coremask 00:06:31.013 ************************************ 00:06:31.013 15:43:28 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:31.013 15:43:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:31.013 15:43:28 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.013 15:43:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.013 15:43:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.013 ************************************ 00:06:31.013 START TEST locking_overlapped_coremask_via_rpc 00:06:31.013 ************************************ 00:06:31.013 15:43:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:31.013 15:43:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=643921 00:06:31.013 15:43:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:31.013 15:43:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 643921 /var/tmp/spdk.sock 00:06:31.013 15:43:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 643921 ']' 00:06:31.013 15:43:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.013 15:43:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.013 15:43:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.013 15:43:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.013 15:43:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.013 [2024-07-12 15:43:28.279694] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:31.013 [2024-07-12 15:43:28.279793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid643921 ] 00:06:31.013 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.271 [2024-07-12 15:43:28.348555] app.c: 910:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
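Note: locking_overlapped_coremask, which finishes just above, relies on the per-core lock files being named /var/tmp/spdk_cpu_lock_<core>: the 0x7 target holds _000 through _002, so the 0x1c target dies on the shared core 2, and check_remaining_locks simply compares the lock-file glob against the expected set. A standalone version of that comparison (assumes a target started with -m 0x7 is still running):

  # Compare the lock files currently present against what a 0x7 (cores 0-2)
  # target should own; mirrors check_remaining_locks in the trace.
  expected=(/var/tmp/spdk_cpu_lock_{000..002})
  actual=(/var/tmp/spdk_cpu_lock_*)

  if [ "${actual[*]}" = "${expected[*]}" ]; then
      echo "lock files match cores 0-2"
  else
      echo "unexpected lock files: ${actual[*]}" >&2
  fi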
00:06:31.271 [2024-07-12 15:43:28.348596] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.271 [2024-07-12 15:43:28.460610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.271 [2024-07-12 15:43:28.460672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.271 [2024-07-12 15:43:28.460675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.204 15:43:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.204 15:43:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:32.204 15:43:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=643944 00:06:32.204 15:43:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 643944 /var/tmp/spdk2.sock 00:06:32.204 15:43:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:32.204 15:43:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 643944 ']' 00:06:32.204 15:43:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.204 15:43:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.204 15:43:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.204 15:43:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.204 15:43:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.204 [2024-07-12 15:43:29.262567] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:32.204 [2024-07-12 15:43:29.262669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid643944 ] 00:06:32.204 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.204 [2024-07-12 15:43:29.357276] app.c: 910:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:32.204 [2024-07-12 15:43:29.357319] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.462 [2024-07-12 15:43:29.580050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.462 [2024-07-12 15:43:29.580111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:32.462 [2024-07-12 15:43:29.580113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.029 [2024-07-12 15:43:30.223840] app.c: 775:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 643921 has claimed it. 
00:06:33.029 request: 00:06:33.029 { 00:06:33.029 "method": "framework_enable_cpumask_locks", 00:06:33.029 "req_id": 1 00:06:33.029 } 00:06:33.029 Got JSON-RPC error response 00:06:33.029 response: 00:06:33.029 { 00:06:33.029 "code": -32603, 00:06:33.029 "message": "Failed to claim CPU core: 2" 00:06:33.029 } 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 643921 /var/tmp/spdk.sock 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 643921 ']' 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.029 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.286 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.286 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:33.286 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 643944 /var/tmp/spdk2.sock 00:06:33.286 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 643944 ']' 00:06:33.286 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.286 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.286 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
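The NOT wrapper around rpc_cmd expects exactly this outcome: while pid 643921 holds the lock file for core 2, framework_enable_cpumask_locks on the second target has to fail with JSON-RPC error -32603. Outside the harness the same call can be reproduced with SPDK's stock rpc.py client against the same socket (shown only as a sketch; this is not part of the recorded test run):

    # Expected while 643921 is still alive:
    #   {"code": -32603, "message": "Failed to claim CPU core: 2"}
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk2.sock framework_enable_cpumask_locks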
00:06:33.287 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.287 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.544 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.544 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:33.545 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:33.545 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:33.545 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:33.545 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:33.545 00:06:33.545 real 0m2.527s 00:06:33.545 user 0m1.238s 00:06:33.545 sys 0m0.218s 00:06:33.545 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.545 15:43:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.545 ************************************ 00:06:33.545 END TEST locking_overlapped_coremask_via_rpc 00:06:33.545 ************************************ 00:06:33.545 15:43:30 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:33.545 15:43:30 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:33.545 15:43:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 643921 ]] 00:06:33.545 15:43:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 643921 00:06:33.545 15:43:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 643921 ']' 00:06:33.545 15:43:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 643921 00:06:33.545 15:43:30 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:33.545 15:43:30 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:33.545 15:43:30 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 643921 00:06:33.545 15:43:30 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:33.545 15:43:30 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:33.545 15:43:30 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 643921' 00:06:33.545 killing process with pid 643921 00:06:33.545 15:43:30 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 643921 00:06:33.545 15:43:30 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 643921 00:06:34.110 15:43:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 643944 ]] 00:06:34.110 15:43:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 643944 00:06:34.110 15:43:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 643944 ']' 00:06:34.110 15:43:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 643944 00:06:34.110 15:43:31 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:06:34.110 15:43:31 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.110 15:43:31 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 643944 00:06:34.110 15:43:31 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:34.110 15:43:31 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:34.110 15:43:31 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 643944' 00:06:34.110 killing process with pid 643944 00:06:34.110 15:43:31 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 643944 00:06:34.110 15:43:31 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 643944 00:06:34.676 15:43:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:34.676 15:43:31 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:34.676 15:43:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 643921 ]] 00:06:34.676 15:43:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 643921 00:06:34.676 15:43:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 643921 ']' 00:06:34.676 15:43:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 643921 00:06:34.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (643921) - No such process 00:06:34.676 15:43:31 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 643921 is not found' 00:06:34.676 Process with pid 643921 is not found 00:06:34.676 15:43:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 643944 ]] 00:06:34.676 15:43:31 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 643944 00:06:34.676 15:43:31 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 643944 ']' 00:06:34.676 15:43:31 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 643944 00:06:34.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (643944) - No such process 00:06:34.676 15:43:31 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 643944 is not found' 00:06:34.676 Process with pid 643944 is not found 00:06:34.676 15:43:31 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:34.676 00:06:34.676 real 0m16.541s 00:06:34.676 user 0m29.756s 00:06:34.676 sys 0m5.179s 00:06:34.676 15:43:31 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.676 15:43:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.676 ************************************ 00:06:34.676 END TEST cpu_locks 00:06:34.677 ************************************ 00:06:34.677 15:43:31 event -- common/autotest_common.sh@1142 -- # return 0 00:06:34.677 00:06:34.677 real 0m40.429s 00:06:34.677 user 1m17.534s 00:06:34.677 sys 0m9.161s 00:06:34.677 15:43:31 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.677 15:43:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.677 ************************************ 00:06:34.677 END TEST event 00:06:34.677 ************************************ 00:06:34.677 15:43:31 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.677 15:43:31 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:34.677 15:43:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.677 15:43:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.677 15:43:31 -- 
common/autotest_common.sh@10 -- # set +x 00:06:34.677 ************************************ 00:06:34.677 START TEST thread 00:06:34.677 ************************************ 00:06:34.677 15:43:31 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:34.677 * Looking for test storage... 00:06:34.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:34.677 15:43:31 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:34.677 15:43:31 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:34.677 15:43:31 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.677 15:43:31 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.677 ************************************ 00:06:34.677 START TEST thread_poller_perf 00:06:34.677 ************************************ 00:06:34.677 15:43:31 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:34.677 [2024-07-12 15:43:31.907327] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:34.677 [2024-07-12 15:43:31.907393] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid644429 ] 00:06:34.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.677 [2024-07-12 15:43:31.964693] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.935 [2024-07-12 15:43:32.065468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.935 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:36.305 ====================================== 00:06:36.305 busy:2712738795 (cyc) 00:06:36.305 total_run_count: 369000 00:06:36.305 tsc_hz: 2700000000 (cyc) 00:06:36.305 ====================================== 00:06:36.305 poller_cost: 7351 (cyc), 2722 (nsec) 00:06:36.305 00:06:36.305 real 0m1.290s 00:06:36.305 user 0m1.211s 00:06:36.305 sys 0m0.074s 00:06:36.305 15:43:33 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.305 15:43:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.305 ************************************ 00:06:36.305 END TEST thread_poller_perf 00:06:36.305 ************************************ 00:06:36.305 15:43:33 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:36.305 15:43:33 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.305 15:43:33 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:36.305 15:43:33 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.305 15:43:33 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.305 ************************************ 00:06:36.305 START TEST thread_poller_perf 00:06:36.305 ************************************ 00:06:36.305 15:43:33 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.305 [2024-07-12 15:43:33.248271] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:36.305 [2024-07-12 15:43:33.248337] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid644581 ] 00:06:36.305 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.305 [2024-07-12 15:43:33.306136] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.305 [2024-07-12 15:43:33.414649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.305 Running 1000 pollers for 1 seconds with 0 microseconds period. 
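The poller_cost line in the result block above is simply the busy cycle count divided by the number of completed poller calls, converted to nanoseconds through the reported TSC frequency; the zero-period run announced just above, whose results follow, prints the same derivation with a far lower per-call cost. A quick check of the printed numbers (values copied verbatim from the first result block):

    # Re-derive "poller_cost: 7351 (cyc), 2722 (nsec)" from the raw counters.
    awk 'BEGIN {
        busy = 2712738795           # busy: TSC cycles spent in pollers
        runs = 369000               # total_run_count
        hz   = 2700000000           # tsc_hz
        cyc  = busy / runs          # cycles per poller invocation
        ns   = cyc / hz * 1e9       # same cost in nanoseconds
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, ns
    }'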
00:06:37.238 ====================================== 00:06:37.238 busy:2702617917 (cyc) 00:06:37.238 total_run_count: 4873000 00:06:37.238 tsc_hz: 2700000000 (cyc) 00:06:37.238 ====================================== 00:06:37.238 poller_cost: 554 (cyc), 205 (nsec) 00:06:37.238 00:06:37.238 real 0m1.291s 00:06:37.238 user 0m1.204s 00:06:37.238 sys 0m0.082s 00:06:37.238 15:43:34 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.238 15:43:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:37.238 ************************************ 00:06:37.238 END TEST thread_poller_perf 00:06:37.238 ************************************ 00:06:37.496 15:43:34 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:37.496 15:43:34 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:37.496 00:06:37.496 real 0m2.734s 00:06:37.496 user 0m2.476s 00:06:37.496 sys 0m0.259s 00:06:37.496 15:43:34 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.496 15:43:34 thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.496 ************************************ 00:06:37.496 END TEST thread 00:06:37.496 ************************************ 00:06:37.496 15:43:34 -- common/autotest_common.sh@1142 -- # return 0 00:06:37.496 15:43:34 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:37.496 15:43:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.496 15:43:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.496 15:43:34 -- common/autotest_common.sh@10 -- # set +x 00:06:37.496 ************************************ 00:06:37.496 START TEST accel 00:06:37.496 ************************************ 00:06:37.496 15:43:34 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:37.496 * Looking for test storage... 00:06:37.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:37.496 15:43:34 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:37.496 15:43:34 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:37.496 15:43:34 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:37.496 15:43:34 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=644778 00:06:37.496 15:43:34 accel -- accel/accel.sh@63 -- # waitforlisten 644778 00:06:37.496 15:43:34 accel -- common/autotest_common.sh@829 -- # '[' -z 644778 ']' 00:06:37.496 15:43:34 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.496 15:43:34 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:37.496 15:43:34 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.496 15:43:34 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:37.496 15:43:34 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:37.496 15:43:34 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.496 15:43:34 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.496 15:43:34 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.496 15:43:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.496 15:43:34 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.496 15:43:34 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.496 15:43:34 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.496 15:43:34 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:37.496 15:43:34 accel -- accel/accel.sh@41 -- # jq -r . 00:06:37.496 [2024-07-12 15:43:34.701812] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:37.496 [2024-07-12 15:43:34.701913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid644778 ] 00:06:37.496 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.496 [2024-07-12 15:43:34.764841] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.754 [2024-07-12 15:43:34.879464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.012 15:43:35 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.012 15:43:35 accel -- common/autotest_common.sh@862 -- # return 0 00:06:38.012 15:43:35 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:38.012 15:43:35 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:38.012 15:43:35 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:38.012 15:43:35 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:38.012 15:43:35 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:38.012 15:43:35 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:38.012 15:43:35 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.012 15:43:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.012 15:43:35 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:38.012 15:43:35 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.012 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.012 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.012 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.012 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.012 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.012 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.012 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.012 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.012 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.012 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.012 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.012 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.012 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.012 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.012 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.012 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.013 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.013 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.013 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.013 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.013 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.013 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.013 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.013 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.013 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.013 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.013 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.013 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.013 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.013 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:38.013 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.013 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.013 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.013 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.013 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.013 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.013 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.013 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.013 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.013 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.013 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.013 15:43:35 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:38.013 15:43:35 accel -- accel/accel.sh@72 -- # IFS== 00:06:38.013 15:43:35 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:38.013 15:43:35 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:38.013 15:43:35 accel -- accel/accel.sh@75 -- # killprocess 644778 00:06:38.013 15:43:35 accel -- common/autotest_common.sh@948 -- # '[' -z 644778 ']' 00:06:38.013 15:43:35 accel -- common/autotest_common.sh@952 -- # kill -0 644778 00:06:38.013 15:43:35 accel -- common/autotest_common.sh@953 -- # uname 00:06:38.013 15:43:35 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.013 15:43:35 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 644778 00:06:38.013 15:43:35 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.013 15:43:35 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.013 15:43:35 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 644778' 00:06:38.013 killing process with pid 644778 00:06:38.013 15:43:35 accel -- common/autotest_common.sh@967 -- # kill 644778 00:06:38.013 15:43:35 accel -- common/autotest_common.sh@972 -- # wait 644778 00:06:38.579 15:43:35 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:38.579 15:43:35 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:38.579 15:43:35 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:38.579 15:43:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.579 15:43:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.579 15:43:35 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:38.579 15:43:35 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:38.579 15:43:35 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:38.579 15:43:35 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.579 15:43:35 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.579 15:43:35 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.579 15:43:35 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.579 15:43:35 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.579 15:43:35 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:38.579 15:43:35 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:38.579 15:43:35 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.579 15:43:35 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:38.579 15:43:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.579 15:43:35 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:38.579 15:43:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:38.579 15:43:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.579 15:43:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.579 ************************************ 00:06:38.579 START TEST accel_missing_filename 00:06:38.579 ************************************ 00:06:38.579 15:43:35 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:38.579 15:43:35 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:38.579 15:43:35 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:38.579 15:43:35 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:38.579 15:43:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.579 15:43:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:38.579 15:43:35 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:38.579 15:43:35 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:38.579 15:43:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:38.579 15:43:35 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:38.579 15:43:35 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.579 15:43:35 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.579 15:43:35 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.579 15:43:35 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.579 15:43:35 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.579 15:43:35 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:38.579 15:43:35 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:38.579 [2024-07-12 15:43:35.743874] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:38.579 [2024-07-12 15:43:35.743937] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid644950 ] 00:06:38.579 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.579 [2024-07-12 15:43:35.801750] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.837 [2024-07-12 15:43:35.910129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.837 [2024-07-12 15:43:35.967067] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.837 [2024-07-12 15:43:36.044380] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:39.095 A filename is required. 
00:06:39.095 15:43:36 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:39.095 15:43:36 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.095 15:43:36 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:39.095 15:43:36 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:39.095 15:43:36 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:39.095 15:43:36 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.095 00:06:39.095 real 0m0.431s 00:06:39.095 user 0m0.323s 00:06:39.095 sys 0m0.140s 00:06:39.095 15:43:36 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.095 15:43:36 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:39.095 ************************************ 00:06:39.095 END TEST accel_missing_filename 00:06:39.095 ************************************ 00:06:39.095 15:43:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.095 15:43:36 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.095 15:43:36 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:39.095 15:43:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.095 15:43:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.095 ************************************ 00:06:39.095 START TEST accel_compress_verify 00:06:39.095 ************************************ 00:06:39.095 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.095 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:39.095 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.095 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:39.095 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.095 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:39.095 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.095 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.095 15:43:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.095 15:43:36 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:39.095 15:43:36 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.095 15:43:36 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.095 15:43:36 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.095 15:43:36 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.095 15:43:36 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.095 15:43:36 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:39.095 15:43:36 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:39.095 [2024-07-12 15:43:36.224315] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:39.095 [2024-07-12 15:43:36.224378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645093 ] 00:06:39.095 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.095 [2024-07-12 15:43:36.283510] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.095 [2024-07-12 15:43:36.387162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.353 [2024-07-12 15:43:36.443959] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:39.353 [2024-07-12 15:43:36.526046] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:39.353 00:06:39.353 Compression does not support the verify option, aborting. 00:06:39.353 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:39.353 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.353 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:39.353 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:39.353 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:39.353 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.353 00:06:39.353 real 0m0.434s 00:06:39.353 user 0m0.328s 00:06:39.353 sys 0m0.140s 00:06:39.353 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.353 15:43:36 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:39.353 ************************************ 00:06:39.353 END TEST accel_compress_verify 00:06:39.353 ************************************ 00:06:39.612 15:43:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.613 15:43:36 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:39.613 15:43:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:39.613 15:43:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.613 15:43:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.613 ************************************ 00:06:39.613 START TEST accel_wrong_workload 00:06:39.613 ************************************ 00:06:39.613 15:43:36 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:39.613 15:43:36 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:39.613 15:43:36 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:39.613 15:43:36 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:39.613 15:43:36 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.613 15:43:36 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:39.613 15:43:36 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.613 15:43:36 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:39.613 15:43:36 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:39.613 15:43:36 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:39.613 15:43:36 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.613 15:43:36 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.613 15:43:36 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.613 15:43:36 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.613 15:43:36 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.613 15:43:36 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:39.613 15:43:36 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:39.613 Unsupported workload type: foobar 00:06:39.613 [2024-07-12 15:43:36.705751] app.c:1459:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:39.613 accel_perf options: 00:06:39.613 [-h help message] 00:06:39.613 [-q queue depth per core] 00:06:39.613 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:39.613 [-T number of threads per core 00:06:39.613 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:39.613 [-t time in seconds] 00:06:39.613 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:39.613 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:39.613 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:39.613 [-l for compress/decompress workloads, name of uncompressed input file 00:06:39.613 [-S for crc32c workload, use this seed value (default 0) 00:06:39.613 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:39.613 [-f for fill workload, use this BYTE value (default 255) 00:06:39.613 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:39.613 [-y verify result if this switch is on] 00:06:39.613 [-a tasks to allocate per core (default: same value as -q)] 00:06:39.613 Can be used to spread operations across a wider range of memory. 
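The usage text above is printed because -w foobar is not a supported workload; the flags it lists are the ones the remaining accel tests exercise. For reference, a valid invocation of the same binary assembled from those flags would look like the sketch below (the -t/-w/-S/-y values mirror the accel_crc32c test further down; -q and -o are illustrative defaults, not taken from this log):

    # Sketch only: crc32c for 1 second with seed value 32, verifying results,
    # queue depth 64, 4 KiB transfers.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w crc32c -S 32 -y -q 64 -o 4096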
00:06:39.613 15:43:36 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:39.613 15:43:36 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.613 15:43:36 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.613 15:43:36 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.613 00:06:39.613 real 0m0.025s 00:06:39.613 user 0m0.015s 00:06:39.613 sys 0m0.010s 00:06:39.613 15:43:36 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.613 15:43:36 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:39.613 ************************************ 00:06:39.613 END TEST accel_wrong_workload 00:06:39.613 ************************************ 00:06:39.613 Error: writing output failed: Broken pipe 00:06:39.613 15:43:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.613 15:43:36 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:39.613 15:43:36 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:39.613 15:43:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.613 15:43:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.613 ************************************ 00:06:39.613 START TEST accel_negative_buffers 00:06:39.613 ************************************ 00:06:39.613 15:43:36 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:39.613 15:43:36 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:39.613 15:43:36 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:39.613 15:43:36 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:39.613 15:43:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.613 15:43:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:39.613 15:43:36 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.613 15:43:36 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:39.613 15:43:36 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:39.613 15:43:36 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:39.613 15:43:36 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.613 15:43:36 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.613 15:43:36 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.613 15:43:36 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.613 15:43:36 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.613 15:43:36 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:39.613 15:43:36 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:39.613 -x option must be non-negative. 
00:06:39.613 [2024-07-12 15:43:36.776636] app.c:1459:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:39.613 accel_perf options: 00:06:39.613 [-h help message] 00:06:39.613 [-q queue depth per core] 00:06:39.613 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:39.613 [-T number of threads per core 00:06:39.613 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:39.613 [-t time in seconds] 00:06:39.613 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:39.613 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:39.613 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:39.613 [-l for compress/decompress workloads, name of uncompressed input file 00:06:39.613 [-S for crc32c workload, use this seed value (default 0) 00:06:39.613 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:39.613 [-f for fill workload, use this BYTE value (default 255) 00:06:39.613 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:39.613 [-y verify result if this switch is on] 00:06:39.613 [-a tasks to allocate per core (default: same value as -q)] 00:06:39.613 Can be used to spread operations across a wider range of memory. 00:06:39.613 15:43:36 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:39.613 15:43:36 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:39.613 15:43:36 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:39.613 15:43:36 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:39.613 00:06:39.613 real 0m0.024s 00:06:39.613 user 0m0.015s 00:06:39.613 sys 0m0.010s 00:06:39.613 15:43:36 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.613 15:43:36 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:39.613 ************************************ 00:06:39.613 END TEST accel_negative_buffers 00:06:39.613 ************************************ 00:06:39.613 Error: writing output failed: Broken pipe 00:06:39.613 15:43:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.613 15:43:36 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:39.613 15:43:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:39.613 15:43:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.613 15:43:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.613 ************************************ 00:06:39.613 START TEST accel_crc32c 00:06:39.613 ************************************ 00:06:39.613 15:43:36 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:39.613 15:43:36 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:39.613 15:43:36 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:39.613 15:43:36 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.613 15:43:36 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:39.613 15:43:36 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.613 15:43:36 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:39.613 15:43:36 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:39.613 15:43:36 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.613 15:43:36 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.613 15:43:36 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.613 15:43:36 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.613 15:43:36 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.613 15:43:36 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:39.613 15:43:36 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:39.613 [2024-07-12 15:43:36.844864] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:39.613 [2024-07-12 15:43:36.844924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645159 ] 00:06:39.613 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.613 [2024-07-12 15:43:36.902222] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.907 [2024-07-12 15:43:37.010979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:39.907 15:43:37 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:41.304 15:43:38 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.304 00:06:41.304 real 0m1.430s 00:06:41.304 user 0m1.300s 00:06:41.304 sys 0m0.132s 00:06:41.304 15:43:38 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.304 15:43:38 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:41.304 ************************************ 00:06:41.304 END TEST accel_crc32c 00:06:41.304 ************************************ 00:06:41.304 15:43:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:41.304 15:43:38 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:41.304 15:43:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:41.304 15:43:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.304 15:43:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.304 ************************************ 00:06:41.304 START TEST accel_crc32c_C2 00:06:41.304 ************************************ 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.304 15:43:38 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:41.304 [2024-07-12 15:43:38.326834] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:41.304 [2024-07-12 15:43:38.326897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645368 ] 00:06:41.304 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.304 [2024-07-12 15:43:38.384823] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.304 [2024-07-12 15:43:38.489534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.304 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:41.305 15:43:38 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.672 00:06:42.672 real 0m1.432s 00:06:42.672 user 0m1.299s 00:06:42.672 sys 0m0.135s 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.672 15:43:39 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:42.672 ************************************ 00:06:42.672 END TEST accel_crc32c_C2 00:06:42.672 ************************************ 00:06:42.672 15:43:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.672 15:43:39 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:42.672 15:43:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:42.672 15:43:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.672 15:43:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.672 ************************************ 00:06:42.672 START TEST accel_copy 00:06:42.672 ************************************ 00:06:42.672 15:43:39 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:42.672 15:43:39 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:42.672 15:43:39 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
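The accel_crc32c_C2 case that just finished (real 0m1.432s, software module) boils down to the single accel_perf run recorded at accel/accel.sh@12 earlier in the trace. A minimal way to repeat it by hand, assuming the same build tree and that the JSON config the harness feeds on fd 62 can be dropped when no accel modules need configuring, would be:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # flags copied verbatim from the trace; -t 1 corresponds to the '1 seconds'
  # value the harness parses above, -y and -C 2 are passed through as-is
  ./build/examples/accel_perf -t 1 -w crc32c -y -C 2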
00:06:42.672 15:43:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.672 15:43:39 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:42.672 15:43:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.672 15:43:39 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:42.672 15:43:39 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:42.672 15:43:39 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.672 15:43:39 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.672 15:43:39 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.672 15:43:39 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.672 15:43:39 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.672 15:43:39 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:42.672 15:43:39 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:42.672 [2024-07-12 15:43:39.806183] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:42.673 [2024-07-12 15:43:39.806245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645597 ] 00:06:42.673 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.673 [2024-07-12 15:43:39.863029] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.930 [2024-07-12 15:43:39.968990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:42.930 15:43:40 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.302 
15:43:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:44.302 15:43:41 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.302 00:06:44.302 real 0m1.440s 00:06:44.302 user 0m1.297s 00:06:44.302 sys 0m0.144s 00:06:44.302 15:43:41 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.302 15:43:41 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:44.302 ************************************ 00:06:44.302 END TEST accel_copy 00:06:44.302 ************************************ 00:06:44.303 15:43:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.303 15:43:41 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.303 15:43:41 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:44.303 15:43:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.303 15:43:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.303 ************************************ 00:06:44.303 START TEST accel_fill 00:06:44.303 ************************************ 00:06:44.303 15:43:41 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:44.303 [2024-07-12 15:43:41.292198] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:44.303 [2024-07-12 15:43:41.292263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645750 ] 00:06:44.303 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.303 [2024-07-12 15:43:41.351088] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.303 [2024-07-12 15:43:41.457141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
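Each sub-test in this block is launched through run_test, e.g. run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y a little earlier in the trace; the 128 shows up again just above as val=0x80 once the harness parses it. run_test's own source is not shown in this log, but its observable effect is the START/END TEST banner pair plus the real/user/sys timing block around each case. A rough stand-in with the same observable behaviour, under the assumption that the banner text and timing format only need to look similar, is:

  run_test_sketch() {
      # hypothetical helper, not SPDK's run_test; it only mimics the banners
      # and the bash `time` output visible in this log
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"
      local rc=$?
      echo "************ END TEST $name ************"
      return $rc
  }
  # accel_test is a function defined inside the harness, so outside of it
  # substitute the underlying binary, e.g.:
  run_test_sketch accel_fill ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y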
00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.303 15:43:41 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.676 15:43:42 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:45.676 15:43:42 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.676 00:06:45.676 real 0m1.433s 00:06:45.676 user 0m1.300s 00:06:45.676 sys 0m0.134s 00:06:45.676 15:43:42 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.676 15:43:42 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:45.676 ************************************ 00:06:45.676 END TEST accel_fill 00:06:45.676 ************************************ 00:06:45.676 15:43:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.676 15:43:42 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:45.676 15:43:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:45.676 15:43:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.676 15:43:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.676 ************************************ 00:06:45.676 START TEST accel_copy_crc32c 00:06:45.676 ************************************ 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:45.676 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:45.676 [2024-07-12 15:43:42.776532] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:45.676 [2024-07-12 15:43:42.776597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid645906 ] 00:06:45.676 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.676 [2024-07-12 15:43:42.836171] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.676 [2024-07-12 15:43:42.940903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:42 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:45.934 
15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:45.934 15:43:43 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.308 00:06:47.308 real 0m1.440s 00:06:47.308 user 0m1.310s 00:06:47.308 sys 0m0.133s 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.308 15:43:44 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:47.308 ************************************ 00:06:47.308 END TEST accel_copy_crc32c 00:06:47.308 ************************************ 00:06:47.308 15:43:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.308 15:43:44 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:47.308 15:43:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:47.308 15:43:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.308 15:43:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.308 ************************************ 00:06:47.308 START TEST accel_copy_crc32c_C2 00:06:47.308 ************************************ 00:06:47.308 15:43:44 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:47.308 [2024-07-12 15:43:44.259784] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:47.308 [2024-07-12 15:43:44.259846] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646184 ] 00:06:47.308 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.308 [2024-07-12 15:43:44.316036] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.308 [2024-07-12 15:43:44.419460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.308 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
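The only difference between the plain copy_crc32c case above and this _C2 variant is the extra -C 2 in the accel_perf command recorded at accel/accel.sh@12 (the "chained" reading of -C is taken from the test name only, so treat it as an assumption). Observably, the plain run was parsed with two '4096 bytes' buffers, while the parsing trace that follows for this variant shows '4096 bytes' and '8192 bytes'. Running the two back to back by hand, with paths as in the log, would look like:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # plain variant, then the -C 2 variant exercised by this sub-test
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2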
00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.309 15:43:44 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.681 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.681 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.681 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.681 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.681 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.681 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.681 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.681 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.682 00:06:48.682 real 0m1.433s 00:06:48.682 user 0m1.303s 00:06:48.682 sys 0m0.132s 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.682 15:43:45 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:48.682 ************************************ 00:06:48.682 END TEST accel_copy_crc32c_C2 00:06:48.682 ************************************ 00:06:48.682 15:43:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.682 15:43:45 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:48.682 15:43:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:48.682 15:43:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.682 15:43:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.682 ************************************ 00:06:48.682 START TEST accel_dualcast 00:06:48.682 ************************************ 00:06:48.682 15:43:45 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:48.682 [2024-07-12 15:43:45.740298] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
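Every case in this block reports its wall-clock cost in the same real/user/sys format (0m1.430s to 0m1.440s so far), immediately before its END TEST banner. When skimming a saved copy of console output like this, the per-test numbers can be pulled out with a simple grep; 'build.log' below is a placeholder name for such a saved copy:

  # print each test's 'real' elapsed time together with its END TEST banner
  grep -E 'real[[:space:]]+[0-9]+m|END TEST' build.log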
00:06:48.682 [2024-07-12 15:43:45.740371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646336 ] 00:06:48.682 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.682 [2024-07-12 15:43:45.798343] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.682 [2024-07-12 15:43:45.904886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.682 15:43:45 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:50.055 15:43:47 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:50.055 15:43:47 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.055 00:06:50.055 real 0m1.437s 00:06:50.055 user 0m1.302s 00:06:50.055 sys 0m0.136s 00:06:50.055 15:43:47 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.055 15:43:47 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:50.055 ************************************ 00:06:50.055 END TEST accel_dualcast 00:06:50.055 ************************************ 00:06:50.055 15:43:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.055 15:43:47 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:50.055 15:43:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:50.055 15:43:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.055 15:43:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.055 ************************************ 00:06:50.055 START TEST accel_compare 00:06:50.055 ************************************ 00:06:50.055 15:43:47 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:50.055 15:43:47 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:50.055 15:43:47 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:50.055 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.055 15:43:47 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:50.055 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.055 15:43:47 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:50.055 15:43:47 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:50.055 15:43:47 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.055 15:43:47 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.055 15:43:47 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.055 15:43:47 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.055 15:43:47 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.055 15:43:47 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:50.055 15:43:47 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:50.055 [2024-07-12 15:43:47.223426] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
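The dualcast case above and the compare case launched here both drive the same binary, /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf, through the accel.sh harness. A minimal standalone reproduction of that invocation is sketched below; the flag meanings (-t run time in seconds, -w workload name, -y verify the result) are inferred from the trace and the '1 seconds' / accel_opc values it prints, so treat them as assumptions rather than documented behaviour, and note that the harness-supplied '-c /dev/fd/62' JSON config is simply dropped here.

    # Hedged sketch: rerun the software 'compare' case outside the CI harness.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from the trace; adjust to your checkout
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compare -y     # 1-second run with result verification (inferred flags)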
00:06:50.055 [2024-07-12 15:43:47.223487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646497 ] 00:06:50.055 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.055 [2024-07-12 15:43:47.280217] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.312 [2024-07-12 15:43:47.386287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.312 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.313 15:43:47 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:50.313 15:43:47 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.684 
15:43:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:51.684 15:43:48 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.684 00:06:51.684 real 0m1.426s 00:06:51.684 user 0m1.299s 00:06:51.684 sys 0m0.128s 00:06:51.684 15:43:48 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.684 15:43:48 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:51.684 ************************************ 00:06:51.684 END TEST accel_compare 00:06:51.684 ************************************ 00:06:51.684 15:43:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.684 15:43:48 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:51.684 15:43:48 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:51.684 15:43:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.684 15:43:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.684 ************************************ 00:06:51.684 START TEST accel_xor 00:06:51.684 ************************************ 00:06:51.684 15:43:48 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:51.684 [2024-07-12 15:43:48.697396] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
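Every case in this log prints the same shell idiom from accel.sh: the harness sets IFS=: and loops over 'read -r var val', dispatching on "$var" with a case statement to pick out values such as accel_opc and accel_module, which the closing '[[ -n software ]]' / '[[ -n compare ]]' checks then assert on. The snippet below only illustrates that idiom with sample data; it is not the real accel.sh source.

    # Illustrative only: the var/val parsing idiom the xtrace shows, fed with sample input.
    accel_opc='' accel_module=''
    while IFS=: read -r var val; do
        case "$var" in
            accel_opc)    accel_opc=$val ;;      # e.g. compare, xor, dif_verify
            accel_module) accel_module=$val ;;   # e.g. software
            *)            : ;;                   # anything else is ignored
        esac
    done < <(printf 'accel_opc:compare\naccel_module:software\n')
    [[ -n $accel_module && -n $accel_opc ]] && echo "ran $accel_opc on $accel_module"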
00:06:51.684 [2024-07-12 15:43:48.697457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646766 ] 00:06:51.684 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.684 [2024-07-12 15:43:48.755263] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.684 [2024-07-12 15:43:48.858472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.684 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:51.685 15:43:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.055 00:06:53.055 real 0m1.421s 00:06:53.055 user 0m1.300s 00:06:53.055 sys 0m0.123s 00:06:53.055 15:43:50 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.055 15:43:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:53.055 ************************************ 00:06:53.055 END TEST accel_xor 00:06:53.055 ************************************ 00:06:53.055 15:43:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.055 15:43:50 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:53.055 15:43:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:53.055 15:43:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.055 15:43:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.055 ************************************ 00:06:53.055 START TEST accel_xor 00:06:53.055 ************************************ 00:06:53.055 15:43:50 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:53.055 15:43:50 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:53.055 [2024-07-12 15:43:50.166902] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
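The xor workload is run twice: once as plain '-w xor -y' (finished above, with val=2 in its trace) and once as '-w xor -y -x 3' (starting here, with val=3). From that difference, -x appears to control the number of xor source buffers, but that reading is inferred from the log rather than stated in it. A hedged back-to-back comparison:

    # Assumed meaning of -x: number of xor source buffers (2 by default, per the trace).
    ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    "$ACCEL_PERF" -t 1 -w xor -y          # two source buffers (the run that just ended)
    "$ACCEL_PERF" -t 1 -w xor -y -x 3     # three source buffers (the run that starts here)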
00:06:53.055 [2024-07-12 15:43:50.166965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid646930 ] 00:06:53.055 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.055 [2024-07-12 15:43:50.225851] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.055 [2024-07-12 15:43:50.329606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:53.312 15:43:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:54.683 15:43:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.683 00:06:54.683 real 0m1.440s 00:06:54.683 user 0m1.302s 00:06:54.683 sys 0m0.140s 00:06:54.683 15:43:51 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.683 15:43:51 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:54.683 ************************************ 00:06:54.683 END TEST accel_xor 00:06:54.683 ************************************ 00:06:54.683 15:43:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.683 15:43:51 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:54.683 15:43:51 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:54.683 15:43:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.683 15:43:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.683 ************************************ 00:06:54.683 START TEST accel_dif_verify 00:06:54.683 ************************************ 00:06:54.683 15:43:51 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:54.683 [2024-07-12 15:43:51.654386] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
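Each case in this section is launched through the harness's run_test helper (run_test accel_dif_verify accel_test -t 1 -w dif_verify, and so on), which is what produces the START TEST / END TEST banners and the real/user/sys timings quoted after every run. The snippet below is only a sketch of that idea, not the actual implementation in common/autotest_common.sh:

    # Hedged sketch of a run_test-style wrapper: banner, timed command, banner.
    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                       # bash's time keyword prints real/user/sys like the log
        echo "************ END TEST $name ************"
    }
    run_test_sketch accel_dif_verify \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify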
00:06:54.683 [2024-07-12 15:43:51.654448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid647081 ] 00:06:54.683 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.683 [2024-07-12 15:43:51.712280] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.683 [2024-07-12 15:43:51.816398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.683 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:54.684 15:43:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:56.054 15:43:53 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.054 00:06:56.054 real 0m1.425s 00:06:56.054 user 0m1.297s 00:06:56.054 sys 0m0.131s 00:06:56.054 15:43:53 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.054 15:43:53 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:56.054 ************************************ 00:06:56.054 END TEST accel_dif_verify 00:06:56.054 ************************************ 00:06:56.054 15:43:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.054 15:43:53 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:56.054 15:43:53 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:56.054 15:43:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.054 15:43:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.054 ************************************ 00:06:56.054 START TEST accel_dif_generate 00:06:56.054 ************************************ 00:06:56.054 15:43:53 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.054 
15:43:53 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:56.054 [2024-07-12 15:43:53.125425] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:56.054 [2024-07-12 15:43:53.125488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid647312 ] 00:06:56.054 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.054 [2024-07-12 15:43:53.183489] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.054 [2024-07-12 15:43:53.288172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:56.054 15:43:53 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.054 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.311 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.312 15:43:53 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:56.312 15:43:53 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.243 15:43:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.243 15:43:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.243 15:43:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.243 15:43:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.501 15:43:54 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:57.501 15:43:54 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.501 00:06:57.501 real 0m1.433s 00:06:57.501 user 0m1.300s 00:06:57.501 sys 0m0.137s 00:06:57.501 15:43:54 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.501 15:43:54 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:57.501 ************************************ 00:06:57.501 END TEST accel_dif_generate 00:06:57.501 ************************************ 00:06:57.501 15:43:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.501 15:43:54 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:57.501 15:43:54 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:57.501 15:43:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.501 15:43:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.501 ************************************ 00:06:57.501 START TEST accel_dif_generate_copy 00:06:57.501 ************************************ 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:57.501 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:57.501 [2024-07-12 15:43:54.606329] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
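For reference, the accel_perf invocation traced just above for the dif_generate_copy case can be repeated outside the accel_test wrapper. This is a minimal sketch only: it assumes the SPDK tree built in this job's workspace and leaves out the JSON accel config that the harness pipes in over /dev/fd/62.

# Sketch: re-run the dif_generate_copy workload directly (defaults apply for
# queue depth and buffer size; -m 0x1 mirrors the single-core mask seen above).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -m 0x1 -t 1 -w dif_generate_copy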
00:06:57.501 [2024-07-12 15:43:54.606391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid647515 ] 00:06:57.501 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.501 [2024-07-12 15:43:54.664371] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.501 [2024-07-12 15:43:54.768428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.759 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
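The long run of accel/accel.sh@19-@21 entries here is the test script walking accel_perf's key/value summary with IFS=: and a case statement, which is how it later asserts that the software module and the dif_generate_copy opcode were actually used. Below is a stripped-down sketch of that parsing pattern, using hypothetical key names and canned input rather than real accel_perf output.

# Sketch of the IFS=: / read -r var val / case "$var" pattern seen in the trace.
parse_summary() {
    local var val accel_opc="" accel_module=""
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=${val//[[:space:]]/} ;;
            *module*) accel_module=${val//[[:space:]]/} ;;
        esac
    done
    echo "opc=$accel_opc module=$accel_module"
}
printf '%s\n' 'opcode: dif_generate_copy' 'module: software' | parse_summary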
00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.760 15:43:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.134 00:06:59.134 real 0m1.441s 00:06:59.134 user 0m1.302s 00:06:59.134 sys 0m0.140s 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.134 15:43:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:59.134 ************************************ 00:06:59.134 END TEST accel_dif_generate_copy 00:06:59.134 ************************************ 00:06:59.134 15:43:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.134 15:43:56 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:59.134 15:43:56 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.134 15:43:56 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:59.134 15:43:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.134 15:43:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.134 ************************************ 00:06:59.134 START TEST accel_comp 00:06:59.134 ************************************ 00:06:59.134 15:43:56 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.134 15:43:56 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:59.134 [2024-07-12 15:43:56.096907] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:06:59.134 [2024-07-12 15:43:56.096969] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid647675 ] 00:06:59.134 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.134 [2024-07-12 15:43:56.154782] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.134 [2024-07-12 15:43:56.259384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.134 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.135 15:43:56 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.135 15:43:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:00.507 15:43:57 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.507 00:07:00.507 real 0m1.437s 00:07:00.507 user 0m1.305s 00:07:00.507 sys 0m0.135s 00:07:00.507 15:43:57 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.507 15:43:57 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:00.507 ************************************ 00:07:00.507 END TEST accel_comp 00:07:00.507 ************************************ 00:07:00.507 15:43:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.507 15:43:57 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.507 15:43:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:00.507 15:43:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.507 15:43:57 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:00.507 ************************************ 00:07:00.507 START TEST accel_decomp 00:07:00.507 ************************************ 00:07:00.507 15:43:57 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.507 15:43:57 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:00.507 15:43:57 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:00.507 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.507 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.507 15:43:57 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.507 15:43:57 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.507 15:43:57 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:00.507 15:43:57 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.507 15:43:57 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.507 15:43:57 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.507 15:43:57 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.507 15:43:57 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.507 15:43:57 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:00.507 15:43:57 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:00.507 [2024-07-12 15:43:57.579757] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
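The accel_decomp case starting here drives accel_perf with -w decompress against the bundled input file under test/accel/ and asks it to verify the result (-y), as the @12 entry above shows. A minimal way to repeat that run by hand, assuming the same build tree and again omitting the harness-supplied JSON config:

# Sketch: single-core decompress run over the bundled test input, with verification.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -m 0x1 -t 1 -w decompress -l "$SPDK/test/accel/bib" -y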
00:07:00.507 [2024-07-12 15:43:57.579823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid647846 ] 00:07:00.507 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.507 [2024-07-12 15:43:57.641295] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.507 [2024-07-12 15:43:57.746695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:00.765 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:00.766 15:43:57 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.139 15:43:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.139 15:43:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.139 15:43:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.139 15:43:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.139 15:43:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.140 15:43:59 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.140 00:07:02.140 real 0m1.444s 00:07:02.140 user 0m1.308s 00:07:02.140 sys 0m0.138s 00:07:02.140 15:43:59 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.140 15:43:59 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:02.140 ************************************ 00:07:02.140 END TEST accel_decomp 00:07:02.140 ************************************ 00:07:02.140 15:43:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.140 15:43:59 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:02.140 15:43:59 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:02.140 15:43:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.140 15:43:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.140 ************************************ 00:07:02.140 START TEST accel_decomp_full 00:07:02.140 ************************************ 00:07:02.140 15:43:59 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:02.140 15:43:59 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:02.140 [2024-07-12 15:43:59.070515] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:07:02.140 [2024-07-12 15:43:59.070577] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648105 ] 00:07:02.140 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.140 [2024-07-12 15:43:59.128833] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.140 [2024-07-12 15:43:59.229809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.140 15:43:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:03.515 15:44:00 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.515 00:07:03.515 real 0m1.428s 00:07:03.515 user 0m1.302s 00:07:03.515 sys 0m0.128s 00:07:03.515 15:44:00 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.515 15:44:00 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:03.515 ************************************ 00:07:03.515 END TEST accel_decomp_full 00:07:03.515 ************************************ 00:07:03.515 15:44:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.515 15:44:00 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:03.515 15:44:00 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:03.515 15:44:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.515 15:44:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.515 ************************************ 00:07:03.515 START TEST accel_decomp_mcore 00:07:03.515 ************************************ 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:03.515 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:03.515 [2024-07-12 15:44:00.550446] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
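In the EAL parameters that follow, the -m 0xf mask from the test arguments shows up as -c 0xf, and one reactor is then reported per set bit (cores 0 through 3). Purely as an illustration of that mapping, and not part of SPDK or the harness, a hex mask can be expanded to a core list like this:

# Illustrative helper (hypothetical, not from SPDK): decode a hex core mask.
mask_to_cores() {
    local mask=$((16#${1#0x})) core=0 cores=()
    while ((mask)); do
        ((mask & 1)) && cores+=("$core")
        ((mask >>= 1, core++))
    done
    echo "cores: ${cores[*]}"
}
mask_to_cores 0xf   # prints: cores: 0 1 2 3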
00:07:03.515 [2024-07-12 15:44:00.550509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648261 ] 00:07:03.515 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.515 [2024-07-12 15:44:00.609198] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.515 [2024-07-12 15:44:00.714212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.516 [2024-07-12 15:44:00.714275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.516 [2024-07-12 15:44:00.714340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.516 [2024-07-12 15:44:00.714343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:03.516 15:44:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.887 00:07:04.887 real 0m1.458s 00:07:04.887 user 0m4.774s 00:07:04.887 sys 0m0.146s 00:07:04.887 15:44:01 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.887 15:44:01 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:04.887 ************************************ 00:07:04.887 END TEST accel_decomp_mcore 00:07:04.887 ************************************ 00:07:04.887 15:44:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.887 15:44:02 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.887 15:44:02 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:04.887 15:44:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.887 15:44:02 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.887 ************************************ 00:07:04.887 START TEST accel_decomp_full_mcore 00:07:04.887 ************************************ 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:04.887 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:04.887 [2024-07-12 15:44:02.061177] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
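The trace above starts accel_decomp_full_mcore: accel.sh feeds its generated JSON accel config to the accel_perf example binary over /dev/fd/62 and runs a one-second decompress of test/accel/bib on core mask 0xf, which is why the EAL lines that follow report four available cores. A minimal sketch of the same invocation outside the test harness, using the workspace paths from this trace (the flag comments are inferred from the values echoed by accel.sh above, not taken from accel_perf's own help output):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -w decompress: workload; -l: input file for the decompress; -t 1: run for 1 second
    # -y: verify the result; -m 0xf: core mask (cores 0-3)
    # -c: JSON accel config, piped in on fd 62 by accel.sh (a manual run would use a real file)
    $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l $SPDK/test/accel/bib -y -o 0 -m 0xf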
00:07:04.887 [2024-07-12 15:44:02.061242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648424 ] 00:07:04.887 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.887 [2024-07-12 15:44:02.124215] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.146 [2024-07-12 15:44:02.239641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.146 [2024-07-12 15:44:02.239702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.146 [2024-07-12 15:44:02.239823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.146 [2024-07-12 15:44:02.239827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.146 15:44:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.517 00:07:06.517 real 0m1.482s 00:07:06.517 user 0m4.817s 00:07:06.517 sys 0m0.167s 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.517 15:44:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:06.517 ************************************ 00:07:06.517 END TEST accel_decomp_full_mcore 00:07:06.517 ************************************ 00:07:06.517 15:44:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.517 15:44:03 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:06.517 15:44:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:06.517 15:44:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.517 15:44:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.517 ************************************ 00:07:06.517 START TEST accel_decomp_mthread 00:07:06.517 ************************************ 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:06.517 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:06.517 [2024-07-12 15:44:03.588961] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
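accel_decomp_mthread reruns the same decompress workload on a single core (the EAL parameters below show core mask 0x1) and adds -T 2; going by the test name and the mcore/mthread split in accel.sh, -T appears to ask accel_perf for two worker threads rather than extra reactor cores, though that reading is inferred from this trace, not from accel_perf documentation. The command as captured above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # single reactor; -T 2 presumed to mean two threads for the decompress operation
    $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l $SPDK/test/accel/bib -y -T 2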
00:07:06.517 [2024-07-12 15:44:03.589023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648697 ] 00:07:06.517 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.517 [2024-07-12 15:44:03.646792] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.517 [2024-07-12 15:44:03.764415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.775 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.776 15:44:03 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.776 15:44:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.767 15:44:05 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.767 00:07:07.767 real 0m1.450s 00:07:07.767 user 0m1.313s 00:07:07.767 sys 0m0.144s 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.767 15:44:05 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:07.767 ************************************ 00:07:07.767 END TEST accel_decomp_mthread 00:07:07.767 ************************************ 00:07:08.032 15:44:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.032 15:44:05 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.032 15:44:05 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:08.032 15:44:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.032 15:44:05 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:08.032 ************************************ 00:07:08.032 START TEST accel_decomp_full_mthread 00:07:08.032 ************************************ 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:08.032 [2024-07-12 15:44:05.086194] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
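accel_decomp_full_mthread combines the two previous variants, passing both -o 0 and -T 2. In the "full" runs the size echoed by accel.sh is '111250 bytes' (see the val='111250 bytes' entries for this test and for accel_decomp_full_mcore above) instead of the '4096 bytes' used by accel_decomp_mthread, so -o 0 appears to make accel_perf process the whole bib payload in one operation rather than in 4 KiB blocks; that interpretation comes from the echoed sizes, not from accel_perf itself. Only -o 0 is new relative to the previous sketch:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # full-buffer, two-thread decompress; -o 0 read as "use the full input size" per the trace
    $SPDK/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress \
        -l $SPDK/test/accel/bib -y -o 0 -T 2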
00:07:08.032 [2024-07-12 15:44:05.086259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid648863 ] 00:07:08.032 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.032 [2024-07-12 15:44:05.145199] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.032 [2024-07-12 15:44:05.248962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.032 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.033 15:44:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.405 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.406 00:07:09.406 real 0m1.465s 00:07:09.406 user 0m1.338s 00:07:09.406 sys 0m0.134s 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.406 15:44:06 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:09.406 ************************************ 00:07:09.406 END TEST accel_decomp_full_mthread 
00:07:09.406 ************************************ 00:07:09.406 15:44:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.406 15:44:06 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:09.406 15:44:06 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:09.406 15:44:06 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:09.406 15:44:06 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:09.406 15:44:06 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.406 15:44:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.406 15:44:06 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.406 15:44:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.406 15:44:06 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.406 15:44:06 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.406 15:44:06 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.406 15:44:06 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:09.406 15:44:06 accel -- accel/accel.sh@41 -- # jq -r . 00:07:09.406 ************************************ 00:07:09.406 START TEST accel_dif_functional_tests 00:07:09.406 ************************************ 00:07:09.406 15:44:06 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:09.406 [2024-07-12 15:44:06.624121] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:07:09.406 [2024-07-12 15:44:06.624184] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid649028 ] 00:07:09.406 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.406 [2024-07-12 15:44:06.683499] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.664 [2024-07-12 15:44:06.791652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.664 [2024-07-12 15:44:06.791719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.664 [2024-07-12 15:44:06.791721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.664 00:07:09.664 00:07:09.664 CUnit - A unit testing framework for C - Version 2.1-3 00:07:09.664 http://cunit.sourceforge.net/ 00:07:09.664 00:07:09.664 00:07:09.664 Suite: accel_dif 00:07:09.664 Test: verify: DIF generated, GUARD check ...passed 00:07:09.664 Test: verify: DIF generated, APPTAG check ...passed 00:07:09.664 Test: verify: DIF generated, REFTAG check ...passed 00:07:09.664 Test: verify: DIF not generated, GUARD check ...[2024-07-12 15:44:06.888214] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:09.664 passed 00:07:09.664 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 15:44:06.888281] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:09.664 passed 00:07:09.664 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 15:44:06.888312] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:09.664 passed 00:07:09.664 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:09.665 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 15:44:06.888373] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:09.665 passed 00:07:09.665 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:09.665 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:09.665 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:09.665 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 15:44:06.888500] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:09.665 passed 00:07:09.665 Test: verify copy: DIF generated, GUARD check ...passed 00:07:09.665 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:09.665 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:09.665 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 15:44:06.888656] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:09.665 passed 00:07:09.665 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 15:44:06.888691] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:09.665 passed 00:07:09.665 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 15:44:06.888748] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:09.665 passed 00:07:09.665 Test: generate copy: DIF generated, GUARD check ...passed 00:07:09.665 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:09.665 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:09.665 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:09.665 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:09.665 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:09.665 Test: generate copy: iovecs-len validate ...[2024-07-12 15:44:06.888981] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
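A note on the accel_dif output: the *ERROR* lines printed by dif.c are expected. Each "not generated" or "incorrect" case deliberately submits data whose Guard, App Tag or Ref Tag does not match so the suite can confirm the mismatch is detected, which is why those tests are still reported as passed. The suite is a standalone CUnit binary, so it can be rerun on its own with the same config plumbing accel.sh uses here (paths taken from the run_test line above; a manual run would point -c at a real JSON config rather than fd 62):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # rerun only the DIF functional tests
    $SPDK/test/accel/dif/dif -c /dev/fd/62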
00:07:09.665 passed 00:07:09.665 Test: generate copy: buffer alignment validate ...passed 00:07:09.665 00:07:09.665 Run Summary: Type Total Ran Passed Failed Inactive 00:07:09.665 suites 1 1 n/a 0 0 00:07:09.665 tests 26 26 26 0 0 00:07:09.665 asserts 115 115 115 0 n/a 00:07:09.665 00:07:09.665 Elapsed time = 0.003 seconds 00:07:09.923 00:07:09.923 real 0m0.555s 00:07:09.923 user 0m0.850s 00:07:09.923 sys 0m0.177s 00:07:09.923 15:44:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.923 15:44:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:09.923 ************************************ 00:07:09.923 END TEST accel_dif_functional_tests 00:07:09.923 ************************************ 00:07:09.923 15:44:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.923 00:07:09.923 real 0m32.561s 00:07:09.923 user 0m36.189s 00:07:09.923 sys 0m4.401s 00:07:09.923 15:44:07 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.923 15:44:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.923 ************************************ 00:07:09.923 END TEST accel 00:07:09.923 ************************************ 00:07:09.923 15:44:07 -- common/autotest_common.sh@1142 -- # return 0 00:07:09.923 15:44:07 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:09.923 15:44:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.923 15:44:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.923 15:44:07 -- common/autotest_common.sh@10 -- # set +x 00:07:09.923 ************************************ 00:07:09.923 START TEST accel_rpc 00:07:09.923 ************************************ 00:07:09.923 15:44:07 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:10.181 * Looking for test storage... 00:07:10.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:10.181 15:44:07 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:10.181 15:44:07 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=649207 00:07:10.181 15:44:07 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:10.181 15:44:07 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 649207 00:07:10.181 15:44:07 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 649207 ']' 00:07:10.181 15:44:07 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.181 15:44:07 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.181 15:44:07 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.181 15:44:07 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.181 15:44:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.181 [2024-07-12 15:44:07.293868] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:07:10.181 [2024-07-12 15:44:07.293946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid649207 ] 00:07:10.181 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.181 [2024-07-12 15:44:07.349616] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.181 [2024-07-12 15:44:07.453334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.439 15:44:07 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.439 15:44:07 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:10.439 15:44:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:10.439 15:44:07 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:10.439 15:44:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:10.439 15:44:07 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:10.439 15:44:07 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:10.439 15:44:07 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.439 15:44:07 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.439 15:44:07 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.439 ************************************ 00:07:10.439 START TEST accel_assign_opcode 00:07:10.439 ************************************ 00:07:10.440 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:10.440 15:44:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:10.440 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.440 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.440 [2024-07-12 15:44:07.526001] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:10.440 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.440 15:44:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:10.440 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.440 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.440 [2024-07-12 15:44:07.534009] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:10.440 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.440 15:44:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:10.440 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.440 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.697 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.697 15:44:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:10.697 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.697 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:07:10.697 15:44:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:10.697 15:44:07 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:10.697 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.697 software 00:07:10.697 00:07:10.697 real 0m0.281s 00:07:10.697 user 0m0.033s 00:07:10.697 sys 0m0.010s 00:07:10.697 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.697 15:44:07 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.697 ************************************ 00:07:10.697 END TEST accel_assign_opcode 00:07:10.697 ************************************ 00:07:10.697 15:44:07 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:10.697 15:44:07 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 649207 00:07:10.697 15:44:07 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 649207 ']' 00:07:10.697 15:44:07 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 649207 00:07:10.697 15:44:07 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:10.697 15:44:07 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.697 15:44:07 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 649207 00:07:10.697 15:44:07 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:10.697 15:44:07 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:10.697 15:44:07 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 649207' 00:07:10.697 killing process with pid 649207 00:07:10.697 15:44:07 accel_rpc -- common/autotest_common.sh@967 -- # kill 649207 00:07:10.697 15:44:07 accel_rpc -- common/autotest_common.sh@972 -- # wait 649207 00:07:11.263 00:07:11.263 real 0m1.078s 00:07:11.263 user 0m1.025s 00:07:11.263 sys 0m0.397s 00:07:11.263 15:44:08 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.263 15:44:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.263 ************************************ 00:07:11.263 END TEST accel_rpc 00:07:11.263 ************************************ 00:07:11.263 15:44:08 -- common/autotest_common.sh@1142 -- # return 0 00:07:11.263 15:44:08 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:11.263 15:44:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.263 15:44:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.263 15:44:08 -- common/autotest_common.sh@10 -- # set +x 00:07:11.263 ************************************ 00:07:11.263 START TEST app_cmdline 00:07:11.263 ************************************ 00:07:11.263 15:44:08 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:11.263 * Looking for test storage... 
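The accel_rpc flow above is driven entirely over JSON-RPC: spdk_tgt is started with --wait-for-rpc, the copy opcode is assigned first to a deliberately bogus module name ("incorrect") and then to "software", framework_start_init completes startup, and accel_get_opc_assignments piped through jq -r .copy confirms that copy landed on the software module before the target is killed. rpc_cmd is a wrapper around scripts/rpc.py, so the same sequence can be issued by hand; a minimal sketch with the paths from this job (the test script additionally waits for /var/tmp/spdk.sock via waitforlisten between steps, omitted here):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_tgt --wait-for-rpc &
    $SPDK/scripts/rpc.py accel_assign_opc -o copy -m software     # pre-init opcode assignment
    $SPDK/scripts/rpc.py framework_start_init                     # finish SPDK initialization
    $SPDK/scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # expect: software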
00:07:11.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:11.263 15:44:08 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:11.263 15:44:08 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=649413 00:07:11.263 15:44:08 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:11.263 15:44:08 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 649413 00:07:11.263 15:44:08 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 649413 ']' 00:07:11.263 15:44:08 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.263 15:44:08 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.263 15:44:08 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.263 15:44:08 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.263 15:44:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.263 [2024-07-12 15:44:08.429682] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:07:11.263 [2024-07-12 15:44:08.429804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid649413 ] 00:07:11.263 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.263 [2024-07-12 15:44:08.487692] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.522 [2024-07-12 15:44:08.594417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.780 15:44:08 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.780 15:44:08 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:11.780 15:44:08 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:11.780 { 00:07:11.780 "version": "SPDK v24.09-pre git sha1 25161080d", 00:07:11.780 "fields": { 00:07:11.780 "major": 24, 00:07:11.780 "minor": 9, 00:07:11.780 "patch": 0, 00:07:11.780 "suffix": "-pre", 00:07:11.780 "commit": "25161080d" 00:07:11.780 } 00:07:11.780 } 00:07:12.038 15:44:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:12.038 15:44:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:12.038 15:44:09 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:12.038 15:44:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:12.038 15:44:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:12.038 15:44:09 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.038 15:44:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:12.038 15:44:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.038 15:44:09 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:12.038 15:44:09 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.038 15:44:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:12.039 15:44:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:12.039 15:44:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.039 15:44:09 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:12.039 15:44:09 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.039 15:44:09 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.039 15:44:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.039 15:44:09 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.039 15:44:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.039 15:44:09 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.039 15:44:09 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.039 15:44:09 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:12.039 15:44:09 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:12.039 15:44:09 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.296 request: 00:07:12.296 { 00:07:12.296 "method": "env_dpdk_get_mem_stats", 00:07:12.296 "req_id": 1 00:07:12.296 } 00:07:12.296 Got JSON-RPC error response 00:07:12.296 response: 00:07:12.296 { 00:07:12.296 "code": -32601, 00:07:12.296 "message": "Method not found" 00:07:12.296 } 00:07:12.296 15:44:09 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:12.296 15:44:09 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:12.296 15:44:09 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:12.297 15:44:09 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:12.297 15:44:09 app_cmdline -- app/cmdline.sh@1 -- # killprocess 649413 00:07:12.297 15:44:09 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 649413 ']' 00:07:12.297 15:44:09 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 649413 00:07:12.297 15:44:09 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:12.297 15:44:09 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:12.297 15:44:09 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 649413 00:07:12.297 15:44:09 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:12.297 15:44:09 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:12.297 15:44:09 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 649413' 00:07:12.297 killing process with pid 649413 00:07:12.297 15:44:09 app_cmdline -- common/autotest_common.sh@967 -- # kill 649413 00:07:12.297 15:44:09 app_cmdline -- common/autotest_common.sh@972 -- # wait 649413 00:07:12.554 00:07:12.555 real 0m1.487s 00:07:12.555 user 0m1.806s 00:07:12.555 sys 0m0.451s 00:07:12.555 15:44:09 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.555 
15:44:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.555 ************************************ 00:07:12.555 END TEST app_cmdline 00:07:12.555 ************************************ 00:07:12.555 15:44:09 -- common/autotest_common.sh@1142 -- # return 0 00:07:12.555 15:44:09 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:12.555 15:44:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:12.555 15:44:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.555 15:44:09 -- common/autotest_common.sh@10 -- # set +x 00:07:12.813 ************************************ 00:07:12.813 START TEST version 00:07:12.813 ************************************ 00:07:12.813 15:44:09 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:12.813 * Looking for test storage... 00:07:12.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:12.813 15:44:09 version -- app/version.sh@17 -- # get_header_version major 00:07:12.813 15:44:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.813 15:44:09 version -- app/version.sh@14 -- # cut -f2 00:07:12.813 15:44:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.813 15:44:09 version -- app/version.sh@17 -- # major=24 00:07:12.813 15:44:09 version -- app/version.sh@18 -- # get_header_version minor 00:07:12.813 15:44:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.813 15:44:09 version -- app/version.sh@14 -- # cut -f2 00:07:12.813 15:44:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.813 15:44:09 version -- app/version.sh@18 -- # minor=9 00:07:12.813 15:44:09 version -- app/version.sh@19 -- # get_header_version patch 00:07:12.813 15:44:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.813 15:44:09 version -- app/version.sh@14 -- # cut -f2 00:07:12.813 15:44:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.813 15:44:09 version -- app/version.sh@19 -- # patch=0 00:07:12.813 15:44:09 version -- app/version.sh@20 -- # get_header_version suffix 00:07:12.813 15:44:09 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.813 15:44:09 version -- app/version.sh@14 -- # cut -f2 00:07:12.813 15:44:09 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.813 15:44:09 version -- app/version.sh@20 -- # suffix=-pre 00:07:12.813 15:44:09 version -- app/version.sh@22 -- # version=24.9 00:07:12.813 15:44:09 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:12.813 15:44:09 version -- app/version.sh@28 -- # version=24.9rc0 00:07:12.813 15:44:09 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:12.813 15:44:09 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
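A brief sketch of the version-header parsing the trace above records: app/version.sh's get_header_version helper pulls each field out of include/spdk/version.h with the grep/cut/tr pipeline shown in the log. The standalone re-declaration below and the relative include path are illustrative only, and the helper is simplified to take the uppercase field name directly; the comparison against the Python package version (py_version) continues in the trace below.

  # Minimal sketch mirroring the logged pipeline (assumes the SPDK_VERSION_*
  # defines in version.h are tab-separated, as the logged 'cut -f2' implies).
  get_header_version() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
  }
  major=$(get_header_version MAJOR)     # 24 in this run
  minor=$(get_header_version MINOR)     # 9
  patch=$(get_header_version PATCH)     # 0
  suffix=$(get_header_version SUFFIX)   # -pre
  version="${major}.${minor}"           # patch of 0 is dropped, giving 24.9
  # with suffix "-pre" the script then checks 24.9rc0 against
  # python3 -c 'import spdk; print(spdk.__version__)', as the trace shows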
00:07:12.813 15:44:09 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:12.813 15:44:09 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:12.813 00:07:12.813 real 0m0.107s 00:07:12.813 user 0m0.055s 00:07:12.813 sys 0m0.074s 00:07:12.813 15:44:09 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.813 15:44:09 version -- common/autotest_common.sh@10 -- # set +x 00:07:12.813 ************************************ 00:07:12.813 END TEST version 00:07:12.813 ************************************ 00:07:12.813 15:44:09 -- common/autotest_common.sh@1142 -- # return 0 00:07:12.813 15:44:09 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:12.813 15:44:09 -- spdk/autotest.sh@198 -- # uname -s 00:07:12.813 15:44:09 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:12.813 15:44:09 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:12.813 15:44:09 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:12.813 15:44:09 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:12.813 15:44:09 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:12.813 15:44:09 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:12.813 15:44:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:12.813 15:44:10 -- common/autotest_common.sh@10 -- # set +x 00:07:12.813 15:44:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:12.813 15:44:10 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:12.813 15:44:10 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:12.813 15:44:10 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:12.813 15:44:10 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:12.813 15:44:10 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:12.813 15:44:10 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:12.813 15:44:10 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:12.813 15:44:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.813 15:44:10 -- common/autotest_common.sh@10 -- # set +x 00:07:12.813 ************************************ 00:07:12.813 START TEST nvmf_tcp 00:07:12.813 ************************************ 00:07:12.813 15:44:10 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:12.813 * Looking for test storage... 00:07:12.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:12.813 15:44:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:12.813 15:44:10 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:12.813 15:44:10 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.813 15:44:10 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:12.813 15:44:10 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.813 15:44:10 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.813 15:44:10 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.813 15:44:10 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.813 15:44:10 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.813 15:44:10 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.813 15:44:10 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.813 15:44:10 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.813 15:44:10 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.813 15:44:10 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.072 15:44:10 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.072 15:44:10 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.072 15:44:10 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.072 15:44:10 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.072 15:44:10 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.072 15:44:10 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.072 15:44:10 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:13.072 15:44:10 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:13.072 15:44:10 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:13.072 15:44:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:13.072 15:44:10 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:13.072 15:44:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:13.072 15:44:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.072 15:44:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:13.072 ************************************ 00:07:13.072 START TEST nvmf_example 00:07:13.072 ************************************ 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:13.072 * Looking for test storage... 
00:07:13.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:13.072 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:13.073 15:44:10 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.607 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.607 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:15.607 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:15.607 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:15.607 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:15.607 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:15.608 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:15.608 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:15.608 Found net devices under 
0000:84:00.0: cvl_0_0 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:15.608 Found net devices under 0000:84:00.1: cvl_0_1 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:15.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:07:15.608 00:07:15.608 --- 10.0.0.2 ping statistics --- 00:07:15.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.608 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:07:15.608 00:07:15.608 --- 10.0.0.1 ping statistics --- 00:07:15.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.608 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=651446 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 651446 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 651446 ']' 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
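The nvmf_tcp_init sequence recorded above wires the two E810 ports into an initiator/target pair on a single host: the target-side port (cvl_0_0 on this rig) is moved into a private network namespace and addressed as 10.0.0.2, while the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1. A condensed sketch of those commands, assuming the same interface names this host detected; the namespace name cvl_0_0_ns_spdk is taken from the trace.

  # Target-side port gets its own namespace; initiator-side port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                                   # sanity-check the path, as above
  modprobe nvme-tcp                                                    # kernel NVMe/TCP support for later connects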
00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.608 15:44:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:15.608 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:16.541 15:44:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:16.541 EAL: No free 2048 kB hugepages reported on node 1 
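Once the example target reports ready, the test provisions it over JSON-RPC and then drives it with spdk_nvme_perf, as the rpc_cmd calls above record. A sketch of the equivalent direct invocations via scripts/rpc.py; rpc_cmd is the test-harness wrapper, and the default RPC socket /var/tmp/spdk.sock is assumed here. The perf results that follow below come from the last command.

  # Create the TCP transport and a 64 MB malloc bdev with 512-byte blocks.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512                          # returns the bdev name, Malloc0 here
  # Expose the bdev through an NVMe-oF subsystem listening on the target-side address.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 10-second mixed random 4 KiB read/write workload at queue depth 64 against that subsystem.
  build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'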
00:07:28.742 Initializing NVMe Controllers 00:07:28.742 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:28.742 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:28.742 Initialization complete. Launching workers. 00:07:28.742 ======================================================== 00:07:28.742 Latency(us) 00:07:28.742 Device Information : IOPS MiB/s Average min max 00:07:28.742 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14738.08 57.57 4338.71 850.14 23316.36 00:07:28.742 ======================================================== 00:07:28.742 Total : 14738.08 57.57 4338.71 850.14 23316.36 00:07:28.742 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:28.742 rmmod nvme_tcp 00:07:28.742 rmmod nvme_fabrics 00:07:28.742 rmmod nvme_keyring 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 651446 ']' 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 651446 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 651446 ']' 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 651446 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 651446 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 651446' 00:07:28.742 killing process with pid 651446 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 651446 00:07:28.742 15:44:23 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 651446 00:07:28.742 nvmf threads initialize successfully 00:07:28.743 bdev subsystem init successfully 00:07:28.743 created a nvmf target service 00:07:28.743 create targets's poll groups done 00:07:28.743 all subsystems of target started 00:07:28.743 nvmf target is running 00:07:28.743 all subsystems of target stopped 00:07:28.743 destroy targets's poll groups done 00:07:28.743 destroyed the nvmf target service 00:07:28.743 bdev subsystem finish successfully 00:07:28.743 nvmf threads destroy successfully 00:07:28.743 15:44:24 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:28.743 15:44:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:28.743 15:44:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:28.743 15:44:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.743 15:44:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:28.743 15:44:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.743 15:44:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.743 15:44:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.001 15:44:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:29.001 15:44:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:29.001 15:44:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:29.001 15:44:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.001 00:07:29.001 real 0m16.142s 00:07:29.001 user 0m45.263s 00:07:29.001 sys 0m3.648s 00:07:29.001 15:44:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.001 15:44:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.001 ************************************ 00:07:29.001 END TEST nvmf_example 00:07:29.001 ************************************ 00:07:29.261 15:44:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:29.261 15:44:26 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:29.261 15:44:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:29.261 15:44:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.261 15:44:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.261 ************************************ 00:07:29.261 START TEST nvmf_filesystem 00:07:29.261 ************************************ 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:29.261 * Looking for test storage... 
00:07:29.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:29.261 15:44:26 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:29.261 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:29.262 #define SPDK_CONFIG_H 00:07:29.262 #define SPDK_CONFIG_APPS 1 00:07:29.262 #define SPDK_CONFIG_ARCH native 00:07:29.262 #undef SPDK_CONFIG_ASAN 00:07:29.262 #undef SPDK_CONFIG_AVAHI 00:07:29.262 #undef SPDK_CONFIG_CET 00:07:29.262 #define SPDK_CONFIG_COVERAGE 1 00:07:29.262 #define SPDK_CONFIG_CROSS_PREFIX 00:07:29.262 #undef SPDK_CONFIG_CRYPTO 00:07:29.262 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:29.262 #undef SPDK_CONFIG_CUSTOMOCF 00:07:29.262 #undef SPDK_CONFIG_DAOS 00:07:29.262 #define SPDK_CONFIG_DAOS_DIR 00:07:29.262 #define SPDK_CONFIG_DEBUG 1 00:07:29.262 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:29.262 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:29.262 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:29.262 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:29.262 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:29.262 #undef SPDK_CONFIG_DPDK_UADK 00:07:29.262 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:29.262 #define SPDK_CONFIG_EXAMPLES 1 00:07:29.262 #undef SPDK_CONFIG_FC 00:07:29.262 #define SPDK_CONFIG_FC_PATH 00:07:29.262 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:29.262 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:29.262 #undef SPDK_CONFIG_FUSE 00:07:29.262 #undef SPDK_CONFIG_FUZZER 00:07:29.262 #define SPDK_CONFIG_FUZZER_LIB 00:07:29.262 #undef SPDK_CONFIG_GOLANG 00:07:29.262 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:29.262 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:29.262 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:29.262 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:29.262 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:29.262 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:29.262 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:29.262 #define SPDK_CONFIG_IDXD 1 00:07:29.262 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:29.262 #undef SPDK_CONFIG_IPSEC_MB 00:07:29.262 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:29.262 #define SPDK_CONFIG_ISAL 1 00:07:29.262 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:29.262 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:29.262 #define SPDK_CONFIG_LIBDIR 00:07:29.262 #undef SPDK_CONFIG_LTO 00:07:29.262 #define SPDK_CONFIG_MAX_LCORES 128 00:07:29.262 #define SPDK_CONFIG_NVME_CUSE 1 00:07:29.262 #undef SPDK_CONFIG_OCF 00:07:29.262 #define SPDK_CONFIG_OCF_PATH 00:07:29.262 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:29.262 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:29.262 #define SPDK_CONFIG_PGO_DIR 00:07:29.262 #undef SPDK_CONFIG_PGO_USE 00:07:29.262 #define SPDK_CONFIG_PREFIX /usr/local 00:07:29.262 #undef SPDK_CONFIG_RAID5F 00:07:29.262 #undef SPDK_CONFIG_RBD 00:07:29.262 #define SPDK_CONFIG_RDMA 1 00:07:29.262 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:29.262 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:29.262 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:29.262 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:29.262 #define SPDK_CONFIG_SHARED 1 00:07:29.262 #undef SPDK_CONFIG_SMA 00:07:29.262 #define SPDK_CONFIG_TESTS 1 00:07:29.262 #undef SPDK_CONFIG_TSAN 00:07:29.262 #define SPDK_CONFIG_UBLK 1 00:07:29.262 #define SPDK_CONFIG_UBSAN 1 00:07:29.262 #undef SPDK_CONFIG_UNIT_TESTS 00:07:29.262 #undef SPDK_CONFIG_URING 00:07:29.262 #define SPDK_CONFIG_URING_PATH 00:07:29.262 #undef SPDK_CONFIG_URING_ZNS 00:07:29.262 #undef SPDK_CONFIG_USDT 00:07:29.262 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:29.262 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:29.262 #define SPDK_CONFIG_VFIO_USER 1 00:07:29.262 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:29.262 #define SPDK_CONFIG_VHOST 1 00:07:29.262 #define SPDK_CONFIG_VIRTIO 1 00:07:29.262 #undef SPDK_CONFIG_VTUNE 00:07:29.262 #define SPDK_CONFIG_VTUNE_DIR 00:07:29.262 #define SPDK_CONFIG_WERROR 1 00:07:29.262 #define SPDK_CONFIG_WPDK_DIR 00:07:29.262 #undef SPDK_CONFIG_XNVME 00:07:29.262 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:29.262 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:29.263 15:44:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:29.263 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
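For reference, the sanitizer environment assembled in the trace above boils down to the short sketch below. The suppression-file path, option strings and RPC socket path are copied from the log; the exact redirection used to write the suppression entry is an assumption, so treat this as an illustration rather than the SPDK script text.

asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" >> "$asan_suppression_file"   # suppress known libfuse leak reports under LeakSanitizer
export LSAN_OPTIONS="suppressions=$asan_suppression_file"
export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"
export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock            # default RPC socket the test helpers talk to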
00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 653154 ]] 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 653154 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.xkhEtx 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.xkhEtx/tests/target /tmp/spdk.xkhEtx 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # 
avails["$mount"]=67108864 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=949354496 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4335075328 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39548739584 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=45083312128 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5534572544 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22538280960 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541656064 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9007878144 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9016664064 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8785920 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=22541127680 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=22541656064 00:07:29.264 15:44:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=528384 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4508323840 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4508327936 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:29.264 * Looking for test storage... 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=39548739584 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7749165056 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:29.264 15:44:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:29.264 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
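The test-storage probe in the trace above reduces to a small amount of arithmetic, recomputed below with the numbers from the df output. Variable names mirror the trace but this is a sketch, not the script itself: the test directory sits on the overlay root, so the harness checks that the root has at least the requested ~2 GiB free and that claiming it would not push usage past 95% of the filesystem.

requested_size=2214592512        # 2 GiB request plus overhead, as set in the trace
target_space=39548739584         # 'avail' reported by df for the overlay mounted on /
(( target_space >= requested_size )) && echo "enough free space on /"
used=5534572544                  # 'used' for the same mount
size=45083312128                 # total size of the overlay
new_size=$(( used + requested_size ))                 # 7749165056, matching the trace
(( new_size * 100 / size > 95 )) || echo "still under the 95% ceiling (~17% here)"
export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target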
00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.265 15:44:26 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:29.265 15:44:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:31.790 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:31.790 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:31.790 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.791 15:44:28 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:31.791 Found net devices under 0000:84:00.0: cvl_0_0 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:31.791 Found net devices under 0000:84:00.1: cvl_0_1 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:31.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:07:31.791 00:07:31.791 --- 10.0.0.2 ping statistics --- 00:07:31.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.791 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:07:31.791 00:07:31.791 --- 10.0.0.1 ping statistics --- 00:07:31.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.791 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:31.791 ************************************ 00:07:31.791 START TEST nvmf_filesystem_no_in_capsule 00:07:31.791 ************************************ 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=654799 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 654799 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 654799 ']' 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.791 15:44:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.791 [2024-07-12 15:44:28.913634] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:07:31.792 [2024-07-12 15:44:28.913716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.792 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.792 [2024-07-12 15:44:28.978007] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:32.050 [2024-07-12 15:44:29.093208] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:32.050 [2024-07-12 15:44:29.093275] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:32.050 [2024-07-12 15:44:29.093289] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:32.050 [2024-07-12 15:44:29.093300] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:32.050 [2024-07-12 15:44:29.093309] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
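A note on what the nvmf/common.sh trace above is doing: because target and initiator share one host, the script parks the target-side e810 port (cvl_0_0) in its own network namespace, addresses the two ports as 10.0.0.2/10.0.0.1, opens TCP port 4420, sanity-checks reachability with ping, and only then launches nvmf_tgt inside that namespace. A condensed sketch using this run's interface names follows; the helper functions and the full binary path live in nvmf/common.sh and the autotest scripts, so treat this as an illustration rather than the scripts themselves.

  # Isolate the target port in a namespace and address both ends of the link
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # nvmfappstart then runs the target inside the namespace (path shortened here)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &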
00:07:32.050 [2024-07-12 15:44:29.093388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.050 [2024-07-12 15:44:29.093453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.050 [2024-07-12 15:44:29.093518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.050 [2024-07-12 15:44:29.093520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.050 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.050 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:32.050 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:32.050 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:32.050 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.050 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.050 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:32.050 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:32.050 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.050 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.050 [2024-07-12 15:44:29.255650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.050 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.050 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:32.050 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.050 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.308 Malloc1 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.308 [2024-07-12 15:44:29.439406] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.308 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:32.308 { 00:07:32.308 "name": "Malloc1", 00:07:32.308 "aliases": [ 00:07:32.308 "398b2a2c-6c1a-41d0-90df-b751731acb21" 00:07:32.308 ], 00:07:32.308 "product_name": "Malloc disk", 00:07:32.308 "block_size": 512, 00:07:32.308 "num_blocks": 1048576, 00:07:32.308 "uuid": "398b2a2c-6c1a-41d0-90df-b751731acb21", 00:07:32.308 "assigned_rate_limits": { 00:07:32.308 "rw_ios_per_sec": 0, 00:07:32.308 "rw_mbytes_per_sec": 0, 00:07:32.308 "r_mbytes_per_sec": 0, 00:07:32.308 "w_mbytes_per_sec": 0 00:07:32.308 }, 00:07:32.308 "claimed": true, 00:07:32.308 "claim_type": "exclusive_write", 00:07:32.308 "zoned": false, 00:07:32.308 "supported_io_types": { 00:07:32.308 "read": true, 00:07:32.308 "write": true, 00:07:32.308 "unmap": true, 00:07:32.308 "flush": true, 00:07:32.308 "reset": true, 00:07:32.308 "nvme_admin": false, 00:07:32.308 "nvme_io": false, 00:07:32.308 "nvme_io_md": false, 00:07:32.308 "write_zeroes": true, 00:07:32.308 "zcopy": true, 00:07:32.308 "get_zone_info": false, 00:07:32.308 "zone_management": false, 00:07:32.308 "zone_append": false, 00:07:32.308 "compare": false, 00:07:32.308 "compare_and_write": false, 00:07:32.308 "abort": true, 00:07:32.308 "seek_hole": false, 00:07:32.308 "seek_data": false, 00:07:32.308 "copy": true, 00:07:32.308 "nvme_iov_md": false 00:07:32.308 }, 00:07:32.308 "memory_domains": [ 00:07:32.308 { 
00:07:32.308 "dma_device_id": "system", 00:07:32.308 "dma_device_type": 1 00:07:32.308 }, 00:07:32.308 { 00:07:32.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.308 "dma_device_type": 2 00:07:32.308 } 00:07:32.308 ], 00:07:32.308 "driver_specific": {} 00:07:32.308 } 00:07:32.308 ]' 00:07:32.309 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:32.309 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:32.309 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:32.309 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:32.309 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:32.309 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:32.309 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:32.309 15:44:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:33.263 15:44:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:33.263 15:44:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:33.263 15:44:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:33.263 15:44:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:33.263 15:44:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:35.177 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:35.434 15:44:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:36.363 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.621 ************************************ 00:07:36.621 START TEST filesystem_ext4 00:07:36.621 ************************************ 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:36.621 15:44:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:36.621 mke2fs 1.46.5 (30-Dec-2021) 00:07:36.621 Discarding device blocks: 0/522240 done 00:07:36.621 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:36.621 Filesystem UUID: 0501d2d5-55db-46eb-bbbe-4eae51895f3f 00:07:36.621 Superblock backups stored on blocks: 00:07:36.621 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:36.621 00:07:36.621 Allocating group tables: 0/64 done 00:07:36.621 Writing inode tables: 0/64 done 00:07:36.621 Creating journal (8192 blocks): done 00:07:36.621 Writing superblocks and filesystem accounting information: 0/64 done 00:07:36.621 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:36.621 15:44:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 654799 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:37.554 00:07:37.554 real 0m1.065s 00:07:37.554 user 0m0.021s 00:07:37.554 sys 0m0.056s 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:37.554 ************************************ 00:07:37.554 END TEST filesystem_ext4 00:07:37.554 ************************************ 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:37.554 15:44:34 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:37.554 ************************************ 00:07:37.554 START TEST filesystem_btrfs 00:07:37.554 ************************************ 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:37.554 15:44:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:38.120 btrfs-progs v6.6.2 00:07:38.120 See https://btrfs.readthedocs.io for more information. 00:07:38.120 00:07:38.120 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
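The make_filesystem calls traced here (common/autotest_common.sh@924-935) do little more than pick the right force flag for the requested filesystem and run the corresponding mkfs tool. A rough reconstruction from the xtrace above, omitting the retry counter the helper also keeps (local i=0); it is a sketch, not the verbatim helper:

  make_filesystem() {
      # Choose the force flag per filesystem, then format the device
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then
          force=-F            # mke2fs forces with -F
      else
          force=-f            # mkfs.btrfs and mkfs.xfs force with -f
      fi
      mkfs."$fstype" $force "$dev_name"
  }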
00:07:38.120 NOTE: several default settings have changed in version 5.15, please make sure 00:07:38.120 this does not affect your deployments: 00:07:38.120 - DUP for metadata (-m dup) 00:07:38.120 - enabled no-holes (-O no-holes) 00:07:38.120 - enabled free-space-tree (-R free-space-tree) 00:07:38.120 00:07:38.120 Label: (null) 00:07:38.120 UUID: 6c24e443-532d-4201-b98c-c386e3bd4572 00:07:38.120 Node size: 16384 00:07:38.120 Sector size: 4096 00:07:38.120 Filesystem size: 510.00MiB 00:07:38.120 Block group profiles: 00:07:38.120 Data: single 8.00MiB 00:07:38.120 Metadata: DUP 32.00MiB 00:07:38.120 System: DUP 8.00MiB 00:07:38.120 SSD detected: yes 00:07:38.120 Zoned device: no 00:07:38.120 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:38.120 Runtime features: free-space-tree 00:07:38.120 Checksum: crc32c 00:07:38.120 Number of devices: 1 00:07:38.120 Devices: 00:07:38.120 ID SIZE PATH 00:07:38.120 1 510.00MiB /dev/nvme0n1p1 00:07:38.120 00:07:38.120 15:44:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:38.120 15:44:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 654799 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:39.052 00:07:39.052 real 0m1.267s 00:07:39.052 user 0m0.027s 00:07:39.052 sys 0m0.113s 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:39.052 ************************************ 00:07:39.052 END TEST filesystem_btrfs 00:07:39.052 ************************************ 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.052 ************************************ 00:07:39.052 START TEST filesystem_xfs 00:07:39.052 ************************************ 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:39.052 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:39.052 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:39.052 = sectsz=512 attr=2, projid32bit=1 00:07:39.052 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:39.052 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:39.052 data = bsize=4096 blocks=130560, imaxpct=25 00:07:39.052 = sunit=0 swidth=0 blks 00:07:39.052 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:39.052 log =internal log bsize=4096 blocks=16384, version=2 00:07:39.052 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:39.052 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:39.983 Discarding blocks...Done. 
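With the filesystem created, every variant is exercised the same way, as the target/filesystem.sh@23-43 trace above and below shows: mount the partition, create and delete a file with syncs in between, unmount, then confirm the target is still alive and the block devices are still visible. Condensed from the trace, with 654799 being this pass's nvmf_tgt pid:

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device

  kill -0 654799                               # target process must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1        # namespace still exported to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition still present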
00:07:39.983 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:39.983 15:44:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 654799 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:41.879 00:07:41.879 real 0m2.832s 00:07:41.879 user 0m0.017s 00:07:41.879 sys 0m0.060s 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:41.879 ************************************ 00:07:41.879 END TEST filesystem_xfs 00:07:41.879 ************************************ 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:41.879 15:44:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:41.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.879 15:44:39 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 654799 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 654799 ']' 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 654799 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 654799 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 654799' 00:07:41.879 killing process with pid 654799 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 654799 00:07:41.879 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 654799 00:07:42.445 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:42.445 00:07:42.445 real 0m10.769s 00:07:42.445 user 0m41.105s 00:07:42.445 sys 0m1.719s 00:07:42.445 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.445 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.445 ************************************ 00:07:42.445 END TEST nvmf_filesystem_no_in_capsule 00:07:42.445 ************************************ 00:07:42.445 15:44:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:42.445 15:44:39 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:42.445 15:44:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:07:42.445 15:44:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.445 15:44:39 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.445 ************************************ 00:07:42.446 START TEST nvmf_filesystem_in_capsule 00:07:42.446 ************************************ 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=656232 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 656232 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 656232 ']' 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.446 15:44:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.446 [2024-07-12 15:44:39.738159] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:07:42.446 [2024-07-12 15:44:39.738230] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:42.704 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.704 [2024-07-12 15:44:39.806627] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.704 [2024-07-12 15:44:39.911385] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.704 [2024-07-12 15:44:39.911439] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
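The in-capsule pass starting here configures the target exactly like the first pass; the only functional difference is -c 4096 on nvmf_create_transport, which allows up to 4096 bytes of data to travel inside the command capsule (the no-in-capsule pass used -c 0). The rpc_cmd calls that follow are, roughly, the equivalent of driving scripts/rpc.py by hand; the rpc.py wrapper and default socket are assumptions here, while the verbs and arguments are taken from the trace:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096    # -c 0 in the no-in-capsule pass
  rpc.py bdev_malloc_create 512 512 -b Malloc1              # 512 MiB ramdisk with 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420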
00:07:42.704 [2024-07-12 15:44:39.911451] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.704 [2024-07-12 15:44:39.911463] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.704 [2024-07-12 15:44:39.911472] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.704 [2024-07-12 15:44:39.911577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.704 [2024-07-12 15:44:39.911636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.704 [2024-07-12 15:44:39.911702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.704 [2024-07-12 15:44:39.911699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.994 [2024-07-12 15:44:40.070083] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.994 Malloc1 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.994 15:44:40 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.994 [2024-07-12 15:44:40.240972] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:42.994 { 00:07:42.994 "name": "Malloc1", 00:07:42.994 "aliases": [ 00:07:42.994 "eed45443-fa3f-4bef-bb6f-8d277bc9268c" 00:07:42.994 ], 00:07:42.994 "product_name": "Malloc disk", 00:07:42.994 "block_size": 512, 00:07:42.994 "num_blocks": 1048576, 00:07:42.994 "uuid": "eed45443-fa3f-4bef-bb6f-8d277bc9268c", 00:07:42.994 "assigned_rate_limits": { 00:07:42.994 "rw_ios_per_sec": 0, 00:07:42.994 "rw_mbytes_per_sec": 0, 00:07:42.994 "r_mbytes_per_sec": 0, 00:07:42.994 "w_mbytes_per_sec": 0 00:07:42.994 }, 00:07:42.994 "claimed": true, 00:07:42.994 "claim_type": "exclusive_write", 00:07:42.994 "zoned": false, 00:07:42.994 "supported_io_types": { 00:07:42.994 "read": true, 00:07:42.994 "write": true, 00:07:42.994 "unmap": true, 00:07:42.994 "flush": true, 00:07:42.994 "reset": true, 00:07:42.994 "nvme_admin": false, 00:07:42.994 "nvme_io": false, 00:07:42.994 "nvme_io_md": false, 00:07:42.994 "write_zeroes": true, 00:07:42.994 "zcopy": true, 00:07:42.994 "get_zone_info": false, 00:07:42.994 "zone_management": false, 00:07:42.994 
"zone_append": false, 00:07:42.994 "compare": false, 00:07:42.994 "compare_and_write": false, 00:07:42.994 "abort": true, 00:07:42.994 "seek_hole": false, 00:07:42.994 "seek_data": false, 00:07:42.994 "copy": true, 00:07:42.994 "nvme_iov_md": false 00:07:42.994 }, 00:07:42.994 "memory_domains": [ 00:07:42.994 { 00:07:42.994 "dma_device_id": "system", 00:07:42.994 "dma_device_type": 1 00:07:42.994 }, 00:07:42.994 { 00:07:42.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.994 "dma_device_type": 2 00:07:42.994 } 00:07:42.994 ], 00:07:42.994 "driver_specific": {} 00:07:42.994 } 00:07:42.994 ]' 00:07:42.994 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:43.275 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:43.275 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:43.275 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:43.275 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:43.275 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:43.275 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:43.275 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:43.840 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:43.840 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:43.840 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:43.840 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:43.840 15:44:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:45.734 15:44:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:45.991 15:44:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:46.248 15:44:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:47.181 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:47.181 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:47.181 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:47.181 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.181 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.440 ************************************ 00:07:47.440 START TEST filesystem_in_capsule_ext4 00:07:47.440 ************************************ 00:07:47.440 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:47.440 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:47.440 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:47.440 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:47.440 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:47.440 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:47.440 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:47.440 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:47.440 15:44:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:47.440 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:47.440 15:44:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:47.440 mke2fs 1.46.5 (30-Dec-2021) 00:07:47.440 Discarding device blocks: 0/522240 done 00:07:47.440 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:47.440 Filesystem UUID: 724c5479-22bd-4e40-b74e-d63dd1da4acd 00:07:47.440 Superblock backups stored on blocks: 00:07:47.441 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:47.441 00:07:47.441 Allocating group tables: 0/64 done 00:07:47.441 Writing inode tables: 0/64 done 00:07:49.966 Creating journal (8192 blocks): done 00:07:50.532 Writing superblocks and filesystem accounting information: 0/64 done 00:07:50.532 00:07:50.532 15:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:50.532 15:44:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 656232 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:51.465 00:07:51.465 real 0m4.157s 00:07:51.465 user 0m0.013s 00:07:51.465 sys 0m0.064s 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:51.465 ************************************ 00:07:51.465 END TEST filesystem_in_capsule_ext4 00:07:51.465 ************************************ 00:07:51.465 
15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.465 ************************************ 00:07:51.465 START TEST filesystem_in_capsule_btrfs 00:07:51.465 ************************************ 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:51.465 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:51.724 btrfs-progs v6.6.2 00:07:51.724 See https://btrfs.readthedocs.io for more information. 00:07:51.724 00:07:51.724 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:51.724 NOTE: several default settings have changed in version 5.15, please make sure 00:07:51.724 this does not affect your deployments: 00:07:51.724 - DUP for metadata (-m dup) 00:07:51.724 - enabled no-holes (-O no-holes) 00:07:51.724 - enabled free-space-tree (-R free-space-tree) 00:07:51.724 00:07:51.724 Label: (null) 00:07:51.724 UUID: 379a968f-df20-49f3-a39e-908abebfdc21 00:07:51.724 Node size: 16384 00:07:51.724 Sector size: 4096 00:07:51.724 Filesystem size: 510.00MiB 00:07:51.724 Block group profiles: 00:07:51.724 Data: single 8.00MiB 00:07:51.724 Metadata: DUP 32.00MiB 00:07:51.724 System: DUP 8.00MiB 00:07:51.724 SSD detected: yes 00:07:51.724 Zoned device: no 00:07:51.724 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:51.724 Runtime features: free-space-tree 00:07:51.724 Checksum: crc32c 00:07:51.724 Number of devices: 1 00:07:51.724 Devices: 00:07:51.724 ID SIZE PATH 00:07:51.724 1 510.00MiB /dev/nvme0n1p1 00:07:51.724 00:07:51.724 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:51.724 15:44:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:52.656 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:52.656 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:52.657 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:52.657 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:52.657 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:52.657 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:52.915 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 656232 00:07:52.915 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:52.915 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:52.915 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:52.915 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:52.915 00:07:52.915 real 0m1.281s 00:07:52.915 user 0m0.020s 00:07:52.915 sys 0m0.114s 00:07:52.915 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.915 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:52.915 ************************************ 00:07:52.915 END TEST filesystem_in_capsule_btrfs 00:07:52.915 ************************************ 00:07:52.915 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:52.915 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:52.915 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:52.915 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.915 15:44:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.915 ************************************ 00:07:52.915 START TEST filesystem_in_capsule_xfs 00:07:52.915 ************************************ 00:07:52.915 15:44:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:52.915 15:44:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:52.915 15:44:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:52.915 15:44:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:52.915 15:44:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:52.915 15:44:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:52.915 15:44:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:52.915 15:44:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:52.915 15:44:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:52.915 15:44:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:52.915 15:44:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:52.915 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:52.915 = sectsz=512 attr=2, projid32bit=1 00:07:52.915 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:52.915 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:52.915 data = bsize=4096 blocks=130560, imaxpct=25 00:07:52.915 = sunit=0 swidth=0 blks 00:07:52.915 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:52.915 log =internal log bsize=4096 blocks=16384, version=2 00:07:52.915 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:52.915 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:53.848 Discarding blocks...Done. 
00:07:53.848 15:44:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:53.848 15:44:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:55.746 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:55.746 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:55.746 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:55.747 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:55.747 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:55.747 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:55.747 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 656232 00:07:55.747 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:55.747 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:55.747 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:55.747 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:55.747 00:07:55.747 real 0m2.648s 00:07:55.747 user 0m0.017s 00:07:55.747 sys 0m0.060s 00:07:55.747 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.747 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:55.747 ************************************ 00:07:55.747 END TEST filesystem_in_capsule_xfs 00:07:55.747 ************************************ 00:07:55.747 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:55.747 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:55.747 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:55.747 15:44:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:56.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:56.003 15:44:53 
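The teardown that follows the last filesystem pass (visible above and in the next lines) is roughly the sequence below; the retry loop around the serial check is simplified here compared to the waitforserial_disconnect helper.
# Condensed teardown sketch, using the values shown in the trace.
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the SPDK_TEST partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # detach the NVMe-oF controller
# wait until no block device with the test serial remains (simplified loop)
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1
done
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the subsystem
kill 656232                                                # stop the nvmf_tgt started for this test ...
wait 656232                                                # ... and wait for it to exit, as traced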
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 656232 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 656232 ']' 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 656232 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:56.003 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 656232 00:07:56.004 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:56.004 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:56.004 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 656232' 00:07:56.004 killing process with pid 656232 00:07:56.004 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 656232 00:07:56.004 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 656232 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:56.572 00:07:56.572 real 0m13.899s 00:07:56.572 user 0m53.369s 00:07:56.572 sys 0m1.989s 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.572 ************************************ 00:07:56.572 END TEST nvmf_filesystem_in_capsule 00:07:56.572 ************************************ 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:56.572 rmmod nvme_tcp 00:07:56.572 rmmod nvme_fabrics 00:07:56.572 rmmod nvme_keyring 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:56.572 15:44:53 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.474 15:44:55 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:58.474 00:07:58.475 real 0m29.369s 00:07:58.475 user 1m35.451s 00:07:58.475 sys 0m5.449s 00:07:58.475 15:44:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.475 15:44:55 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:58.475 ************************************ 00:07:58.475 END TEST nvmf_filesystem 00:07:58.475 ************************************ 00:07:58.475 15:44:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:58.475 15:44:55 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:58.475 15:44:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:58.475 15:44:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.475 15:44:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:58.475 ************************************ 00:07:58.475 START TEST nvmf_target_discovery 00:07:58.475 ************************************ 00:07:58.475 15:44:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:58.732 * Looking for test storage... 
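nvmftestfini, traced just above, unloads the kernel NVMe fabrics modules and clears the test addresses before the next test starts; condensed from the rmmod output in the log:
# Module/interface cleanup at the end of the filesystem test (sketch from the trace).
modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring, per the rmmod lines above
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1       # drop the initiator-side test address
# the cvl_0_0_ns_spdk network namespace is torn down by the _remove_spdk_ns helper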
00:07:58.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:58.732 15:44:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.630 15:44:57 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:00.630 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:00.630 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:00.630 Found net devices under 0000:84:00.0: cvl_0_0 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.630 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:00.630 Found net devices under 0000:84:00.1: cvl_0_1 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.631 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.889 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.890 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.890 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
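The NIC discovery traced above amounts to walking sysfs for each supported PCI device; a condensed sketch of the loop as nvmf/common.sh traces it (here the two 0x159b E810 ports):
# Map each supported PCI NIC to its kernel netdev name (condensed from the trace).
for pci in "${pci_devs[@]}"; do                       # e.g. 0000:84:00.0, 0000:84:00.1
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # sysfs lists the attached netdevs
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the names, e.g. cvl_0_0
    net_devs+=("${pci_net_devs[@]}")                  # collected as cvl_0_0 and cvl_0_1 above
done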
ip link set cvl_0_1 up 00:08:00.890 15:44:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:00.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:08:00.890 00:08:00.890 --- 10.0.0.2 ping statistics --- 00:08:00.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.890 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:00.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:08:00.890 00:08:00.890 --- 10.0.0.1 ping statistics --- 00:08:00.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.890 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=660041 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 660041 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 660041 ']' 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
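With the two ports identified, the target side is moved into its own network namespace and connectivity is verified, as the ip/ping output above shows; condensed from the trace, with the interface names and addresses as logged:
# Network setup for the TCP test bed (sketch).
ip netns add cvl_0_0_ns_spdk                                        # namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port goes inside
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # host -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host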
domain socket /var/tmp/spdk.sock...' 00:08:00.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.890 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:00.890 [2024-07-12 15:44:58.113010] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:08:00.890 [2024-07-12 15:44:58.113103] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.890 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.890 [2024-07-12 15:44:58.177939] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:01.148 [2024-07-12 15:44:58.291277] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.148 [2024-07-12 15:44:58.291336] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.148 [2024-07-12 15:44:58.291350] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.148 [2024-07-12 15:44:58.291361] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.148 [2024-07-12 15:44:58.291371] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.148 [2024-07-12 15:44:58.291460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.148 [2024-07-12 15:44:58.291521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.148 [2024-07-12 15:44:58.291590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:01.148 [2024-07-12 15:44:58.291593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.148 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:01.148 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:01.148 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:01.148 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:01.148 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 [2024-07-12 15:44:58.451606] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
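The target application is then launched inside that namespace and a TCP transport is created over RPC. A condensed sketch using the values from the trace; rpc_cmd is the test suite's RPC helper, and the pid/path details are specific to this run.
# Start nvmf_tgt in the target namespace and create the TCP transport (sketch).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &  # full path under the spdk checkout in this run; pid 660041 here
nvmfpid=$!
waitforlisten "$nvmfpid"                           # autotest helper: wait for /var/tmp/spdk.sock
rpc_cmd nvmf_create_transport -t tcp -o -u 8192    # options exactly as traced above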
00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 Null1 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 [2024-07-12 15:44:58.491958] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 Null2 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:01.406 15:44:58 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 Null3 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 Null4 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.406 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.407 15:44:58 
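The discovery test then builds four identical targets: for each i it creates a null bdev, creates a subsystem, attaches the bdev as a namespace and adds a TCP listener, which is the loop traced in the surrounding lines; condensed:
# Create four null-backed subsystems listening on 10.0.0.2:4420 (sketch of the traced loop).
for i in $(seq 1 4); do
    rpc_cmd bdev_null_create Null$i 102400 512                 # size/block-size values as traced
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        -a -s SPDK0000000000000$i                              # allow any host, fixed serial
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
# plus a listener and a referral for the discovery service itself:
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430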
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.407 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:08:01.665 00:08:01.665 Discovery Log Number of Records 6, Generation counter 6 00:08:01.665 =====Discovery Log Entry 0====== 00:08:01.665 trtype: tcp 00:08:01.665 adrfam: ipv4 00:08:01.665 subtype: current discovery subsystem 00:08:01.665 treq: not required 00:08:01.665 portid: 0 00:08:01.665 trsvcid: 4420 00:08:01.665 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:01.665 traddr: 10.0.0.2 00:08:01.665 eflags: explicit discovery connections, duplicate discovery information 00:08:01.665 sectype: none 00:08:01.665 =====Discovery Log Entry 1====== 00:08:01.665 trtype: tcp 00:08:01.665 adrfam: ipv4 00:08:01.665 subtype: nvme subsystem 00:08:01.665 treq: not required 00:08:01.665 portid: 0 00:08:01.665 trsvcid: 4420 00:08:01.665 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:01.665 traddr: 10.0.0.2 00:08:01.665 eflags: none 00:08:01.665 sectype: none 00:08:01.665 =====Discovery Log Entry 2====== 00:08:01.665 trtype: tcp 00:08:01.665 adrfam: ipv4 00:08:01.665 subtype: nvme subsystem 00:08:01.665 treq: not required 00:08:01.665 portid: 0 00:08:01.665 trsvcid: 4420 00:08:01.665 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:01.665 traddr: 10.0.0.2 00:08:01.665 eflags: none 00:08:01.665 sectype: none 00:08:01.665 =====Discovery Log Entry 3====== 00:08:01.665 trtype: tcp 00:08:01.665 adrfam: ipv4 00:08:01.665 subtype: nvme subsystem 00:08:01.665 treq: not required 00:08:01.665 portid: 0 00:08:01.665 trsvcid: 4420 00:08:01.665 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:01.665 traddr: 10.0.0.2 00:08:01.665 eflags: none 00:08:01.665 sectype: none 00:08:01.665 =====Discovery Log Entry 4====== 00:08:01.665 trtype: tcp 00:08:01.665 adrfam: ipv4 00:08:01.665 subtype: nvme subsystem 00:08:01.665 treq: not required 
00:08:01.665 portid: 0 00:08:01.665 trsvcid: 4420 00:08:01.665 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:01.665 traddr: 10.0.0.2 00:08:01.665 eflags: none 00:08:01.665 sectype: none 00:08:01.665 =====Discovery Log Entry 5====== 00:08:01.665 trtype: tcp 00:08:01.665 adrfam: ipv4 00:08:01.665 subtype: discovery subsystem referral 00:08:01.665 treq: not required 00:08:01.665 portid: 0 00:08:01.665 trsvcid: 4430 00:08:01.665 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:01.665 traddr: 10.0.0.2 00:08:01.665 eflags: none 00:08:01.665 sectype: none 00:08:01.665 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:01.665 Perform nvmf subsystem discovery via RPC 00:08:01.665 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:01.665 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.665 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.665 [ 00:08:01.665 { 00:08:01.665 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:01.665 "subtype": "Discovery", 00:08:01.665 "listen_addresses": [ 00:08:01.665 { 00:08:01.665 "trtype": "TCP", 00:08:01.665 "adrfam": "IPv4", 00:08:01.665 "traddr": "10.0.0.2", 00:08:01.665 "trsvcid": "4420" 00:08:01.665 } 00:08:01.665 ], 00:08:01.665 "allow_any_host": true, 00:08:01.665 "hosts": [] 00:08:01.665 }, 00:08:01.665 { 00:08:01.665 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:01.665 "subtype": "NVMe", 00:08:01.665 "listen_addresses": [ 00:08:01.665 { 00:08:01.665 "trtype": "TCP", 00:08:01.665 "adrfam": "IPv4", 00:08:01.665 "traddr": "10.0.0.2", 00:08:01.665 "trsvcid": "4420" 00:08:01.665 } 00:08:01.665 ], 00:08:01.665 "allow_any_host": true, 00:08:01.665 "hosts": [], 00:08:01.665 "serial_number": "SPDK00000000000001", 00:08:01.665 "model_number": "SPDK bdev Controller", 00:08:01.665 "max_namespaces": 32, 00:08:01.665 "min_cntlid": 1, 00:08:01.665 "max_cntlid": 65519, 00:08:01.665 "namespaces": [ 00:08:01.665 { 00:08:01.665 "nsid": 1, 00:08:01.665 "bdev_name": "Null1", 00:08:01.665 "name": "Null1", 00:08:01.665 "nguid": "7E570999926B4668A54EA5FCE3A9A0AB", 00:08:01.665 "uuid": "7e570999-926b-4668-a54e-a5fce3a9a0ab" 00:08:01.665 } 00:08:01.665 ] 00:08:01.665 }, 00:08:01.665 { 00:08:01.665 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:01.665 "subtype": "NVMe", 00:08:01.665 "listen_addresses": [ 00:08:01.665 { 00:08:01.665 "trtype": "TCP", 00:08:01.665 "adrfam": "IPv4", 00:08:01.665 "traddr": "10.0.0.2", 00:08:01.665 "trsvcid": "4420" 00:08:01.665 } 00:08:01.665 ], 00:08:01.665 "allow_any_host": true, 00:08:01.665 "hosts": [], 00:08:01.665 "serial_number": "SPDK00000000000002", 00:08:01.665 "model_number": "SPDK bdev Controller", 00:08:01.665 "max_namespaces": 32, 00:08:01.665 "min_cntlid": 1, 00:08:01.666 "max_cntlid": 65519, 00:08:01.666 "namespaces": [ 00:08:01.666 { 00:08:01.666 "nsid": 1, 00:08:01.666 "bdev_name": "Null2", 00:08:01.666 "name": "Null2", 00:08:01.666 "nguid": "455809A8D7454F79B7EE6EAE1CC28119", 00:08:01.666 "uuid": "455809a8-d745-4f79-b7ee-6eae1cc28119" 00:08:01.666 } 00:08:01.666 ] 00:08:01.666 }, 00:08:01.666 { 00:08:01.666 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:01.666 "subtype": "NVMe", 00:08:01.666 "listen_addresses": [ 00:08:01.666 { 00:08:01.666 "trtype": "TCP", 00:08:01.666 "adrfam": "IPv4", 00:08:01.666 "traddr": "10.0.0.2", 00:08:01.666 "trsvcid": "4420" 00:08:01.666 } 00:08:01.666 ], 00:08:01.666 "allow_any_host": true, 
00:08:01.666 "hosts": [], 00:08:01.666 "serial_number": "SPDK00000000000003", 00:08:01.666 "model_number": "SPDK bdev Controller", 00:08:01.666 "max_namespaces": 32, 00:08:01.666 "min_cntlid": 1, 00:08:01.666 "max_cntlid": 65519, 00:08:01.666 "namespaces": [ 00:08:01.666 { 00:08:01.666 "nsid": 1, 00:08:01.666 "bdev_name": "Null3", 00:08:01.666 "name": "Null3", 00:08:01.666 "nguid": "3033B575F2214AF4BC39705EF514F22A", 00:08:01.666 "uuid": "3033b575-f221-4af4-bc39-705ef514f22a" 00:08:01.666 } 00:08:01.666 ] 00:08:01.666 }, 00:08:01.666 { 00:08:01.666 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:01.666 "subtype": "NVMe", 00:08:01.666 "listen_addresses": [ 00:08:01.666 { 00:08:01.666 "trtype": "TCP", 00:08:01.666 "adrfam": "IPv4", 00:08:01.666 "traddr": "10.0.0.2", 00:08:01.666 "trsvcid": "4420" 00:08:01.666 } 00:08:01.666 ], 00:08:01.666 "allow_any_host": true, 00:08:01.666 "hosts": [], 00:08:01.666 "serial_number": "SPDK00000000000004", 00:08:01.666 "model_number": "SPDK bdev Controller", 00:08:01.666 "max_namespaces": 32, 00:08:01.666 "min_cntlid": 1, 00:08:01.666 "max_cntlid": 65519, 00:08:01.666 "namespaces": [ 00:08:01.666 { 00:08:01.666 "nsid": 1, 00:08:01.666 "bdev_name": "Null4", 00:08:01.666 "name": "Null4", 00:08:01.666 "nguid": "3E4962B3C0114D078AACC53DCAEA00D5", 00:08:01.666 "uuid": "3e4962b3-c011-4d07-8aac-c53dcaea00d5" 00:08:01.666 } 00:08:01.666 ] 00:08:01.666 } 00:08:01.666 ] 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:01.666 rmmod nvme_tcp 00:08:01.666 rmmod nvme_fabrics 00:08:01.666 rmmod nvme_keyring 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:01.666 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 660041 ']' 00:08:01.667 15:44:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 660041 00:08:01.667 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 660041 ']' 00:08:01.667 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 660041 00:08:01.667 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:01.667 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:01.667 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 660041 00:08:01.667 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:01.667 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:01.667 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 660041' 00:08:01.667 killing process with pid 660041 00:08:01.667 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 660041 00:08:01.667 15:44:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 660041 00:08:01.925 15:44:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:01.925 15:44:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:01.925 15:44:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:01.925 15:44:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:01.925 15:44:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:01.925 15:44:59 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.925 15:44:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.925 15:44:59 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.470 15:45:01 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:04.470 00:08:04.470 real 0m5.495s 00:08:04.470 user 0m4.351s 00:08:04.470 sys 0m1.880s 00:08:04.470 15:45:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:04.470 15:45:01 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:04.470 ************************************ 00:08:04.470 END TEST nvmf_target_discovery 00:08:04.470 ************************************ 00:08:04.470 15:45:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:08:04.470 15:45:01 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:04.470 15:45:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:04.470 15:45:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:04.470 15:45:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:04.470 ************************************ 00:08:04.470 START TEST nvmf_referrals 00:08:04.470 ************************************ 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:04.470 * Looking for test storage... 00:08:04.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
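Before any referral RPCs run, nvmftestinit builds the point-to-point TCP topology that the trace below records: both e810 ports are detected, the first (cvl_0_0) moves into a private network namespace as the target side at 10.0.0.2, and the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, verified by one ping in each direction. A condensed sketch of that setup using the same commands the trace captures; the cvl_0_* names are whatever this rig's ice driver exposes:

  # sketch of nvmf_tcp_init as traced below; interface names are rig-specific
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator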
00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:04.470 15:45:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:04.471 15:45:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:04.471 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:04.471 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.471 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:04.471 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:04.471 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:04.471 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.471 15:45:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:04.471 15:45:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.471 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:04.471 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:04.471 15:45:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:04.471 15:45:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:06.389 15:45:03 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:06.389 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:06.389 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.389 15:45:03 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:06.389 Found net devices under 0000:84:00.0: cvl_0_0 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:06.389 Found net devices under 0000:84:00.1: cvl_0_1 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:06.389 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:06.390 15:45:03 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:06.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:06.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:08:06.390 00:08:06.390 --- 10.0.0.2 ping statistics --- 00:08:06.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.390 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:06.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:06.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:08:06.390 00:08:06.390 --- 10.0.0.1 ping statistics --- 00:08:06.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:06.390 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=662213 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 662213 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 662213 ']' 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
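Once nvmf_tgt is up inside the namespace, the referral checks that follow pair every rpc_cmd mutation with a read-back over the wire: three referrals are added and must appear both in nvmf_discovery_get_referrals and in the discovery log served on 10.0.0.2:8009, then they are removed and the log must be empty again. A standalone sketch of one such round trip, assuming the transport and discovery listener created below and an illustrative rpc.py path; the trace also passes --hostnqn/--hostid to nvme discover, omitted here for brevity:

  # sketch only: add three referrals, verify via RPC and via the discovery log, remove them
  rpc=./scripts/rpc.py                                 # assumed location of SPDK's rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_add_referral -t tcp -a $ip -s 4430
  done
  # the RPC view and the on-the-wire discovery log should list the same addresses
  $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json | \
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_remove_referral -t tcp -a $ip -s 4430
  done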
00:08:06.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.390 15:45:03 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.390 [2024-07-12 15:45:03.680826] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:08:06.390 [2024-07-12 15:45:03.680904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.648 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.648 [2024-07-12 15:45:03.772048] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:06.648 [2024-07-12 15:45:03.914842] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.648 [2024-07-12 15:45:03.914903] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.648 [2024-07-12 15:45:03.914929] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.648 [2024-07-12 15:45:03.914954] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.648 [2024-07-12 15:45:03.914974] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:06.648 [2024-07-12 15:45:03.915050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:06.648 [2024-07-12 15:45:03.915109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:06.648 [2024-07-12 15:45:03.915178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:06.648 [2024-07-12 15:45:03.915187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.906 [2024-07-12 15:45:04.063419] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.906 [2024-07-12 15:45:04.075619] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.906 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:06.907 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.165 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.421 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:07.421 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:07.421 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:07.421 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:07.421 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:07.421 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:07.421 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:07.422 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:07.679 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:07.679 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:07.679 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:07.679 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:07.679 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:07.679 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:07.679 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:07.679 15:45:04 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:07.679 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:07.679 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:07.679 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:07.679 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:07.679 15:45:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:07.936 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:08.193 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:08.193 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:08.193 15:45:05 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:08.193 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:08.193 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:08.193 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:08.193 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:08.193 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:08.193 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:08.193 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:08.193 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:08.193 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:08.193 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:08.450 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:08.707 
15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:08.707 rmmod nvme_tcp 00:08:08.707 rmmod nvme_fabrics 00:08:08.707 rmmod nvme_keyring 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 662213 ']' 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 662213 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 662213 ']' 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 662213 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 662213 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 662213' 00:08:08.707 killing process with pid 662213 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 662213 00:08:08.707 15:45:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 662213 00:08:08.967 15:45:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:08.967 15:45:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:08.967 15:45:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:08.967 15:45:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:08.967 15:45:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:08.967 15:45:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:08.967 15:45:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:08.967 15:45:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.885 15:45:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:10.885 00:08:10.885 real 0m6.859s 00:08:10.885 user 0m10.195s 00:08:10.885 sys 0m2.234s 00:08:10.885 15:45:08 nvmf_tcp.nvmf_referrals 
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.885 15:45:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:10.885 ************************************ 00:08:10.885 END TEST nvmf_referrals 00:08:10.885 ************************************ 00:08:11.151 15:45:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:11.151 15:45:08 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:11.151 15:45:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:11.151 15:45:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.151 15:45:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.151 ************************************ 00:08:11.151 START TEST nvmf_connect_disconnect 00:08:11.151 ************************************ 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:11.151 * Looking for test storage... 00:08:11.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.151 15:45:08 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.151 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:11.152 15:45:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:13.679 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:13.679 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:13.679 15:45:10 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:13.679 Found net devices under 0000:84:00.0: cvl_0_0 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:13.679 Found net devices under 0000:84:00.1: cvl_0_1 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:13.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:08:13.679 00:08:13.679 --- 10.0.0.2 ping statistics --- 00:08:13.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.679 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:13.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:08:13.679 00:08:13.679 --- 10.0.0.1 ping statistics --- 00:08:13.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.679 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:13.679 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=665103 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 665103 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 665103 ']' 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.680 [2024-07-12 15:45:10.586426] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:08:13.680 [2024-07-12 15:45:10.586511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.680 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.680 [2024-07-12 15:45:10.652731] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.680 [2024-07-12 15:45:10.766057] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.680 [2024-07-12 15:45:10.766136] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:13.680 [2024-07-12 15:45:10.766150] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.680 [2024-07-12 15:45:10.766161] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.680 [2024-07-12 15:45:10.766170] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.680 [2024-07-12 15:45:10.766260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.680 [2024-07-12 15:45:10.766337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.680 [2024-07-12 15:45:10.766356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.680 [2024-07-12 15:45:10.766361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.680 [2024-07-12 15:45:10.929697] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:13.680 15:45:10 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.680 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.940 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.940 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:13.940 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.940 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.940 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.940 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.940 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.940 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:13.940 [2024-07-12 15:45:10.991976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.940 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.940 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:13.940 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:13.940 15:45:10 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:16.465 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.064 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:28.064 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:28.064 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:28.064 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:28.064 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:28.064 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:28.064 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:28.065 rmmod nvme_tcp 00:08:28.065 rmmod nvme_fabrics 00:08:28.065 rmmod nvme_keyring 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 665103 ']' 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 665103 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 665103 ']' 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 665103 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 665103 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 665103' 00:08:28.065 killing process with pid 665103 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 665103 00:08:28.065 15:45:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 665103 00:08:28.065 15:45:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:28.065 15:45:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:28.065 15:45:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:28.065 15:45:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.065 15:45:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:28.065 15:45:25 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.065 15:45:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.065 15:45:25 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.602 15:45:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:30.602 00:08:30.602 real 0m19.077s 00:08:30.602 user 0m57.167s 00:08:30.602 sys 0m3.440s 00:08:30.602 15:45:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.602 15:45:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.602 ************************************ 00:08:30.602 END TEST nvmf_connect_disconnect 00:08:30.602 ************************************ 00:08:30.602 15:45:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:30.602 15:45:27 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:30.602 15:45:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:30.602 15:45:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.602 15:45:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:30.602 ************************************ 00:08:30.602 START TEST nvmf_multitarget 00:08:30.602 ************************************ 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:30.602 * Looking for test storage... 
00:08:30.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:30.602 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.603 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:30.603 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:30.603 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:30.603 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:08:30.603 15:45:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.603 15:45:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.603 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:30.603 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:30.603 15:45:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:30.603 15:45:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:32.505 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:32.505 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:32.505 Found net devices under 0000:84:00.0: cvl_0_0 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:32.505 Found net devices under 0000:84:00.1: cvl_0_1 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:32.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:32.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:08:32.505 00:08:32.505 --- 10.0.0.2 ping statistics --- 00:08:32.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.505 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:08:32.505 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:08:32.506 00:08:32.506 --- 10.0.0.1 ping statistics --- 00:08:32.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.506 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=668818 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 668818 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 668818 ']' 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.506 15:45:29 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:32.506 [2024-07-12 15:45:29.783785] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:08:32.506 [2024-07-12 15:45:29.783873] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.764 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.764 [2024-07-12 15:45:29.851543] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.764 [2024-07-12 15:45:29.964602] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.764 [2024-07-12 15:45:29.964656] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.764 [2024-07-12 15:45:29.964671] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.764 [2024-07-12 15:45:29.964683] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.764 [2024-07-12 15:45:29.964693] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.764 [2024-07-12 15:45:29.964747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.764 [2024-07-12 15:45:29.964797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.764 [2024-07-12 15:45:29.964861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.764 [2024-07-12 15:45:29.964865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.022 15:45:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.022 15:45:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:08:33.022 15:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:33.022 15:45:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:33.022 15:45:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:33.022 15:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.022 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:33.022 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:33.022 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:33.022 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:33.022 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:33.280 "nvmf_tgt_1" 00:08:33.280 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:33.280 "nvmf_tgt_2" 00:08:33.280 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:33.280 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:33.538 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:08:33.538 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:33.538 true 00:08:33.538 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:33.538 true 00:08:33.538 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:33.538 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.796 rmmod nvme_tcp 00:08:33.796 rmmod nvme_fabrics 00:08:33.796 rmmod nvme_keyring 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 668818 ']' 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 668818 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 668818 ']' 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 668818 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:33.796 15:45:30 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 668818 00:08:33.796 15:45:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:33.796 15:45:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:33.796 15:45:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 668818' 00:08:33.796 killing process with pid 668818 00:08:33.796 15:45:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 668818 00:08:33.796 15:45:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 668818 00:08:34.055 15:45:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:34.055 15:45:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:34.055 15:45:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:34.055 15:45:31 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.055 15:45:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:34.055 15:45:31 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.055 15:45:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:34.055 15:45:31 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.593 15:45:33 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:36.593 00:08:36.593 real 0m5.976s 00:08:36.593 user 0m6.554s 00:08:36.593 sys 0m2.074s 00:08:36.593 15:45:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.593 15:45:33 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:36.593 ************************************ 00:08:36.593 END TEST nvmf_multitarget 00:08:36.593 ************************************ 00:08:36.593 15:45:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:36.593 15:45:33 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:36.593 15:45:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:36.593 15:45:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.593 15:45:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:36.593 ************************************ 00:08:36.593 START TEST nvmf_rpc 00:08:36.593 ************************************ 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:36.593 * Looking for test storage... 
00:08:36.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:36.593 15:45:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:38.494 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:38.494 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.494 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:38.495 Found net devices under 0000:84:00.0: cvl_0_0 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:38.495 Found net devices under 0000:84:00.1: cvl_0_1 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:38.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:08:38.495 00:08:38.495 --- 10.0.0.2 ping statistics --- 00:08:38.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.495 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:08:38.495 00:08:38.495 --- 10.0.0.1 ping statistics --- 00:08:38.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.495 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=670994 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 670994 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 670994 ']' 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:38.495 15:45:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.495 [2024-07-12 15:45:35.702242] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:08:38.495 [2024-07-12 15:45:35.702323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.495 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.495 [2024-07-12 15:45:35.766015] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.753 [2024-07-12 15:45:35.876367] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.753 [2024-07-12 15:45:35.876417] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:38.753 [2024-07-12 15:45:35.876445] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.753 [2024-07-12 15:45:35.876457] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.753 [2024-07-12 15:45:35.876466] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.753 [2024-07-12 15:45:35.876523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.753 [2024-07-12 15:45:35.876584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.753 [2024-07-12 15:45:35.876647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.753 [2024-07-12 15:45:35.876650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.753 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.753 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:38.753 15:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:38.753 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:38.753 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:38.753 15:45:36 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.753 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:38.753 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.753 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:39.011 "tick_rate": 2700000000, 00:08:39.011 "poll_groups": [ 00:08:39.011 { 00:08:39.011 "name": "nvmf_tgt_poll_group_000", 00:08:39.011 "admin_qpairs": 0, 00:08:39.011 "io_qpairs": 0, 00:08:39.011 "current_admin_qpairs": 0, 00:08:39.011 "current_io_qpairs": 0, 00:08:39.011 "pending_bdev_io": 0, 00:08:39.011 "completed_nvme_io": 0, 00:08:39.011 "transports": [] 00:08:39.011 }, 00:08:39.011 { 00:08:39.011 "name": "nvmf_tgt_poll_group_001", 00:08:39.011 "admin_qpairs": 0, 00:08:39.011 "io_qpairs": 0, 00:08:39.011 "current_admin_qpairs": 0, 00:08:39.011 "current_io_qpairs": 0, 00:08:39.011 "pending_bdev_io": 0, 00:08:39.011 "completed_nvme_io": 0, 00:08:39.011 "transports": [] 00:08:39.011 }, 00:08:39.011 { 00:08:39.011 "name": "nvmf_tgt_poll_group_002", 00:08:39.011 "admin_qpairs": 0, 00:08:39.011 "io_qpairs": 0, 00:08:39.011 "current_admin_qpairs": 0, 00:08:39.011 "current_io_qpairs": 0, 00:08:39.011 "pending_bdev_io": 0, 00:08:39.011 "completed_nvme_io": 0, 00:08:39.011 "transports": [] 00:08:39.011 }, 00:08:39.011 { 00:08:39.011 "name": "nvmf_tgt_poll_group_003", 00:08:39.011 "admin_qpairs": 0, 00:08:39.011 "io_qpairs": 0, 00:08:39.011 "current_admin_qpairs": 0, 00:08:39.011 "current_io_qpairs": 0, 00:08:39.011 "pending_bdev_io": 0, 00:08:39.011 "completed_nvme_io": 0, 00:08:39.011 "transports": [] 00:08:39.011 } 00:08:39.011 ] 00:08:39.011 }' 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.011 [2024-07-12 15:45:36.139058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.011 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:39.011 "tick_rate": 2700000000, 00:08:39.011 "poll_groups": [ 00:08:39.011 { 00:08:39.011 "name": "nvmf_tgt_poll_group_000", 00:08:39.011 "admin_qpairs": 0, 00:08:39.011 "io_qpairs": 0, 00:08:39.011 "current_admin_qpairs": 0, 00:08:39.011 "current_io_qpairs": 0, 00:08:39.011 "pending_bdev_io": 0, 00:08:39.011 "completed_nvme_io": 0, 00:08:39.011 "transports": [ 00:08:39.011 { 00:08:39.011 "trtype": "TCP" 00:08:39.011 } 00:08:39.011 ] 00:08:39.011 }, 00:08:39.011 { 00:08:39.011 "name": "nvmf_tgt_poll_group_001", 00:08:39.011 "admin_qpairs": 0, 00:08:39.011 "io_qpairs": 0, 00:08:39.011 "current_admin_qpairs": 0, 00:08:39.011 "current_io_qpairs": 0, 00:08:39.011 "pending_bdev_io": 0, 00:08:39.011 "completed_nvme_io": 0, 00:08:39.011 "transports": [ 00:08:39.011 { 00:08:39.011 "trtype": "TCP" 00:08:39.011 } 00:08:39.011 ] 00:08:39.011 }, 00:08:39.011 { 00:08:39.011 "name": "nvmf_tgt_poll_group_002", 00:08:39.011 "admin_qpairs": 0, 00:08:39.011 "io_qpairs": 0, 00:08:39.011 "current_admin_qpairs": 0, 00:08:39.011 "current_io_qpairs": 0, 00:08:39.011 "pending_bdev_io": 0, 00:08:39.011 "completed_nvme_io": 0, 00:08:39.011 "transports": [ 00:08:39.012 { 00:08:39.012 "trtype": "TCP" 00:08:39.012 } 00:08:39.012 ] 00:08:39.012 }, 00:08:39.012 { 00:08:39.012 "name": "nvmf_tgt_poll_group_003", 00:08:39.012 "admin_qpairs": 0, 00:08:39.012 "io_qpairs": 0, 00:08:39.012 "current_admin_qpairs": 0, 00:08:39.012 "current_io_qpairs": 0, 00:08:39.012 "pending_bdev_io": 0, 00:08:39.012 "completed_nvme_io": 0, 00:08:39.012 "transports": [ 00:08:39.012 { 00:08:39.012 "trtype": "TCP" 00:08:39.012 } 00:08:39.012 ] 00:08:39.012 } 00:08:39.012 ] 00:08:39.012 }' 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.012 Malloc1 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.012 [2024-07-12 15:45:36.296691] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.012 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:39.269 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.269 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:39.269 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:39.269 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:39.269 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:39.269 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:08:39.269 [2024-07-12 15:45:36.319212] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:08:39.269 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:39.270 could not add new controller: failed to write to nvme-fabrics device 00:08:39.270 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:39.270 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:39.270 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:39.270 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:39.270 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:39.270 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.270 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:39.270 15:45:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.270 15:45:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:39.836 15:45:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:08:39.836 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:39.836 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:39.836 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:39.836 15:45:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:42.362 15:45:39 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:42.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:42.362 [2024-07-12 15:45:39.170986] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:08:42.362 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:42.362 could not add new controller: failed to write to nvme-fabrics device 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:42.362 15:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:42.619 15:45:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:08:42.619 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:42.619 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:42.619 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:42.619 15:45:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:44.512 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:44.512 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:44.512 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:44.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:08:44.771 15:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:44.772 15:45:41 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.772 [2024-07-12 15:45:41.947706] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.772 15:45:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:45.400 15:45:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:45.400 15:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:45.400 15:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:45.400 15:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:45.400 15:45:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:47.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.925 [2024-07-12 15:45:44.764502] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:47.925 15:45:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:48.182 15:45:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:48.182 15:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:08:48.182 15:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:48.182 15:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:48.182 15:45:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:50.712 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:50.712 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:50.712 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:50.712 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:50.712 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:50.712 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:50.712 15:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:50.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.712 15:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:50.712 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:50.712 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:50.712 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.712 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:50.712 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.713 [2024-07-12 15:45:47.573013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:50.713 15:45:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:50.971 15:45:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:50.971 15:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:50.971 15:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:50.971 15:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:50.971 15:45:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:53.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.501 [2024-07-12 15:45:50.383999] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:53.501 15:45:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:54.067 15:45:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:54.067 15:45:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:54.067 15:45:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:54.067 15:45:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:54.067 15:45:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:55.974 
15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:55.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.974 [2024-07-12 15:45:53.200451] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.974 15:45:53 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.974 15:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:56.542 15:45:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:08:56.542 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:08:56.542 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:56.542 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:56.542 15:45:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:59.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 [2024-07-12 15:45:55.984533] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.085 15:45:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 [2024-07-12 15:45:56.032604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.085 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 [2024-07-12 15:45:56.080809] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
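The entries above cycle through target/rpc.sh lines 81-94 several times: build a subsystem, attach namespace 5, open it to any host, connect from the initiator, wait for the serial to show up under lsblk, then disconnect and tear the subsystem back down. A minimal bash sketch of one such iteration, reconstructed from the traced commands; it assumes rpc_cmd simply forwards to scripts/rpc.py against the running target, and it borrows the NQN, host UUID, and listener address from this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

  # Build the subsystem and expose Malloc1 as namespace 5 on the TCP listener.
  $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
  $rpc nvmf_subsystem_allow_any_host "$nqn"

  # Connect from the initiator side and poll until the serial is visible,
  # which is the same lsblk/grep check waitforserial performs in the trace.
  nvme connect --hostnqn="$hostnqn" --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 \
      -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
      sleep 2
  done

  # Disconnect and clean up so the next iteration starts from scratch.
  nvme disconnect -n "$nqn"
  $rpc nvmf_subsystem_remove_ns "$nqn" 5
  $rpc nvmf_delete_subsystem "$nqn"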
00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 [2024-07-12 15:45:56.128977] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
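The loop the trace is walking through here (target/rpc.sh lines 99-107, five passes per the seq 1 5 above) skips the initiator entirely and only exercises the target-side namespace lifecycle: the namespace is added without an explicit NSID, so it lands on NSID 1, and is then removed before the subsystem is deleted. A sketch of that cycle, reusing the $rpc and $nqn shorthands from the previous block:

  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns "$nqn" Malloc1        # no -n: lowest free NSID, i.e. 1
      $rpc nvmf_subsystem_allow_any_host "$nqn"
      $rpc nvmf_subsystem_remove_ns "$nqn" 1
      $rpc nvmf_delete_subsystem "$nqn"
  done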
00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 [2024-07-12 15:45:56.177179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:08:59.086 "tick_rate": 2700000000, 00:08:59.086 "poll_groups": [ 00:08:59.086 { 00:08:59.086 "name": "nvmf_tgt_poll_group_000", 00:08:59.086 "admin_qpairs": 2, 00:08:59.086 "io_qpairs": 84, 00:08:59.086 "current_admin_qpairs": 0, 00:08:59.086 "current_io_qpairs": 0, 00:08:59.086 "pending_bdev_io": 0, 00:08:59.086 "completed_nvme_io": 136, 00:08:59.086 "transports": [ 00:08:59.086 { 00:08:59.086 "trtype": "TCP" 00:08:59.086 } 00:08:59.086 ] 00:08:59.086 }, 00:08:59.086 { 00:08:59.086 "name": "nvmf_tgt_poll_group_001", 00:08:59.086 "admin_qpairs": 2, 00:08:59.086 "io_qpairs": 84, 00:08:59.086 "current_admin_qpairs": 0, 00:08:59.086 "current_io_qpairs": 0, 00:08:59.086 "pending_bdev_io": 0, 00:08:59.086 "completed_nvme_io": 182, 00:08:59.086 "transports": [ 00:08:59.086 { 00:08:59.086 "trtype": "TCP" 00:08:59.086 } 00:08:59.086 ] 00:08:59.086 }, 00:08:59.086 { 00:08:59.086 
"name": "nvmf_tgt_poll_group_002", 00:08:59.086 "admin_qpairs": 1, 00:08:59.086 "io_qpairs": 84, 00:08:59.086 "current_admin_qpairs": 0, 00:08:59.086 "current_io_qpairs": 0, 00:08:59.086 "pending_bdev_io": 0, 00:08:59.086 "completed_nvme_io": 185, 00:08:59.086 "transports": [ 00:08:59.086 { 00:08:59.086 "trtype": "TCP" 00:08:59.086 } 00:08:59.086 ] 00:08:59.086 }, 00:08:59.086 { 00:08:59.086 "name": "nvmf_tgt_poll_group_003", 00:08:59.086 "admin_qpairs": 2, 00:08:59.086 "io_qpairs": 84, 00:08:59.086 "current_admin_qpairs": 0, 00:08:59.086 "current_io_qpairs": 0, 00:08:59.086 "pending_bdev_io": 0, 00:08:59.086 "completed_nvme_io": 183, 00:08:59.086 "transports": [ 00:08:59.086 { 00:08:59.086 "trtype": "TCP" 00:08:59.086 } 00:08:59.086 ] 00:08:59.086 } 00:08:59.086 ] 00:08:59.086 }' 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:59.086 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:59.087 rmmod nvme_tcp 00:08:59.087 rmmod nvme_fabrics 00:08:59.087 rmmod nvme_keyring 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 670994 ']' 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 670994 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 670994 ']' 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 670994 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 670994 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 670994' 00:08:59.087 killing process with pid 670994 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 670994 00:08:59.087 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 670994 00:08:59.655 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:59.656 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:59.656 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:59.656 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.656 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:59.656 15:45:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.656 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.656 15:45:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.566 15:45:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:01.566 00:09:01.566 real 0m25.352s 00:09:01.566 user 1m22.009s 00:09:01.566 sys 0m4.402s 00:09:01.566 15:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.566 15:45:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.566 ************************************ 00:09:01.566 END TEST nvmf_rpc 00:09:01.566 ************************************ 00:09:01.566 15:45:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:01.566 15:45:58 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:01.566 15:45:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:01.566 15:45:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.566 15:45:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:01.566 ************************************ 00:09:01.566 START TEST nvmf_invalid 00:09:01.566 ************************************ 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:01.566 * Looking for test storage... 
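Just before the nvmf_rpc teardown above, the script dumps nvmf_get_stats and sums per-poll-group counters with the jq | awk pipeline shown in the trace (admin_qpairs totalled 7 and io_qpairs 336 in this run). A sketch of that helper; the real jsum works on the captured $stats JSON rather than re-querying the target, and $rpc points at scripts/rpc.py as in the earlier sketches:

  # Sum one numeric field across all poll groups reported by nvmf_get_stats.
  jsum() {
      local filter=$1
      "$rpc" nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
  }

  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in the run above
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 336 in the run above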
00:09:01.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.566 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:01.567 15:45:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.101 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:04.102 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:04.102 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:04.102 Found net devices under 0000:84:00.0: cvl_0_0 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:04.102 Found net devices under 0000:84:00.1: cvl_0_1 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.102 15:46:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:04.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:04.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:09:04.102 00:09:04.102 --- 10.0.0.2 ping statistics --- 00:09:04.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.102 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:04.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:04.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:09:04.102 00:09:04.102 --- 10.0.0.1 ping statistics --- 00:09:04.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.102 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=675560 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 675560 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 675560 ']' 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.102 15:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:04.102 [2024-07-12 15:46:01.175276] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
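The nvmftestinit/nvmfappstart sequence traced through this stretch moves one e810 port (cvl_0_0) into a private network namespace so the target and the initiator can exercise a real TCP link on a single machine, then launches nvmf_tgt inside that namespace. A condensed sketch using the interface names and addresses from this run; the real common.sh adds cleanup traps, retries, and path handling that are omitted here, and the binary path is shortened:

  ns=cvl_0_0_ns_spdk
  ip netns add $ns
  ip link set cvl_0_0 netns $ns                      # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
  ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $ns ip link set cvl_0_0 up
  ip netns exec $ns ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root namespace reaches the target address
  ip netns exec $ns ping -c 1 10.0.0.1               # and the namespace reaches the initiator
  modprobe nvme-tcp

  # Launch the target inside the namespace with all tracepoint groups enabled.
  ip netns exec $ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

After the launch, the harness waits on the target's RPC socket (waitforlisten in the trace) before any of the nvmf_create_subsystem probes run.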
00:09:04.102 [2024-07-12 15:46:01.175353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.103 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.103 [2024-07-12 15:46:01.243511] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.103 [2024-07-12 15:46:01.364564] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.103 [2024-07-12 15:46:01.364619] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.103 [2024-07-12 15:46:01.364633] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.103 [2024-07-12 15:46:01.364644] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.103 [2024-07-12 15:46:01.364654] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.103 [2024-07-12 15:46:01.364742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.103 [2024-07-12 15:46:01.364806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.103 [2024-07-12 15:46:01.364829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.103 [2024-07-12 15:46:01.364832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.361 15:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.361 15:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:09:04.361 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:04.361 15:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.361 15:46:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:04.361 15:46:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.361 15:46:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:04.361 15:46:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3810 00:09:04.619 [2024-07-12 15:46:01.799410] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:04.619 15:46:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:04.619 { 00:09:04.619 "nqn": "nqn.2016-06.io.spdk:cnode3810", 00:09:04.619 "tgt_name": "foobar", 00:09:04.619 "method": "nvmf_create_subsystem", 00:09:04.619 "req_id": 1 00:09:04.619 } 00:09:04.619 Got JSON-RPC error response 00:09:04.619 response: 00:09:04.619 { 00:09:04.619 "code": -32603, 00:09:04.619 "message": "Unable to find target foobar" 00:09:04.619 }' 00:09:04.619 15:46:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:04.619 { 00:09:04.619 "nqn": "nqn.2016-06.io.spdk:cnode3810", 00:09:04.619 "tgt_name": "foobar", 00:09:04.619 "method": "nvmf_create_subsystem", 00:09:04.619 "req_id": 1 00:09:04.619 } 00:09:04.619 Got JSON-RPC error response 00:09:04.619 response: 00:09:04.619 { 00:09:04.619 "code": -32603, 00:09:04.619 "message": "Unable to find target foobar" 00:09:04.619 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:04.619 15:46:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:04.619 15:46:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode21248 00:09:04.876 [2024-07-12 15:46:02.096430] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21248: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:04.876 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:04.876 { 00:09:04.876 "nqn": "nqn.2016-06.io.spdk:cnode21248", 00:09:04.876 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:04.876 "method": "nvmf_create_subsystem", 00:09:04.876 "req_id": 1 00:09:04.876 } 00:09:04.876 Got JSON-RPC error response 00:09:04.876 response: 00:09:04.876 { 00:09:04.876 "code": -32602, 00:09:04.876 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:04.876 }' 00:09:04.876 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:04.876 { 00:09:04.876 "nqn": "nqn.2016-06.io.spdk:cnode21248", 00:09:04.876 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:04.876 "method": "nvmf_create_subsystem", 00:09:04.876 "req_id": 1 00:09:04.876 } 00:09:04.876 Got JSON-RPC error response 00:09:04.876 response: 00:09:04.876 { 00:09:04.876 "code": -32602, 00:09:04.877 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:04.877 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:04.877 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:04.877 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode8683 00:09:05.136 [2024-07-12 15:46:02.361296] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8683: invalid model number 'SPDK_Controller' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:05.136 { 00:09:05.136 "nqn": "nqn.2016-06.io.spdk:cnode8683", 00:09:05.136 "model_number": "SPDK_Controller\u001f", 00:09:05.136 "method": "nvmf_create_subsystem", 00:09:05.136 "req_id": 1 00:09:05.136 } 00:09:05.136 Got JSON-RPC error response 00:09:05.136 response: 00:09:05.136 { 00:09:05.136 "code": -32602, 00:09:05.136 "message": "Invalid MN SPDK_Controller\u001f" 00:09:05.136 }' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:05.136 { 00:09:05.136 "nqn": "nqn.2016-06.io.spdk:cnode8683", 00:09:05.136 "model_number": "SPDK_Controller\u001f", 00:09:05.136 "method": "nvmf_create_subsystem", 00:09:05.136 "req_id": 1 00:09:05.136 } 00:09:05.136 Got JSON-RPC error response 00:09:05.136 response: 00:09:05.136 { 00:09:05.136 "code": -32602, 00:09:05.136 "message": "Invalid MN SPDK_Controller\u001f" 00:09:05.136 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' 
'87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:05.136 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
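The xtrace output around this point is gen_random_s building a random string one character at a time: each pass picks a decimal ASCII code from the chars array, converts it to hex with printf %x, renders the character with echo -e '\xNN', and appends it to string. A minimal standalone sketch of that technique (the code value here is just an example; the script drives it from the array above):

  code=42                      # decimal ASCII for '*'
  hex=$(printf '%x' "$code")   # -> 2a
  char=$(echo -e "\x$hex")     # -> *
  string+=$char                # accumulate the random string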
00:09:05.137 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.137 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:05.137 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:05.137 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:05.137 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.137 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ * == \- ]] 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '*gu!T7!~FC6AMq#wxTp)U' 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '*gu!T7!~FC6AMq#wxTp)U' nqn.2016-06.io.spdk:cnode13128 00:09:05.395 [2024-07-12 15:46:02.662325] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13128: invalid serial number '*gu!T7!~FC6AMq#wxTp)U' 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:05.395 { 00:09:05.395 "nqn": "nqn.2016-06.io.spdk:cnode13128", 00:09:05.395 "serial_number": "*gu!T7!~FC6AMq#wxTp)U", 00:09:05.395 "method": "nvmf_create_subsystem", 00:09:05.395 
"req_id": 1 00:09:05.395 } 00:09:05.395 Got JSON-RPC error response 00:09:05.395 response: 00:09:05.395 { 00:09:05.395 "code": -32602, 00:09:05.395 "message": "Invalid SN *gu!T7!~FC6AMq#wxTp)U" 00:09:05.395 }' 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:05.395 { 00:09:05.395 "nqn": "nqn.2016-06.io.spdk:cnode13128", 00:09:05.395 "serial_number": "*gu!T7!~FC6AMq#wxTp)U", 00:09:05.395 "method": "nvmf_create_subsystem", 00:09:05.395 "req_id": 1 00:09:05.395 } 00:09:05.395 Got JSON-RPC error response 00:09:05.395 response: 00:09:05.395 { 00:09:05.395 "code": -32602, 00:09:05.395 "message": "Invalid SN *gu!T7!~FC6AMq#wxTp)U" 00:09:05.395 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:05.395 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:09:05.654 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 85 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 
00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 
00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:09:05.655 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ k == \- ]] 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'kQz(u:bBoaYXUR&6J@yx[P[tGt*i(xN]W:v1~.Ve\' 00:09:05.656 15:46:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'kQz(u:bBoaYXUR&6J@yx[P[tGt*i(xN]W:v1~.Ve\' nqn.2016-06.io.spdk:cnode275 00:09:05.915 [2024-07-12 15:46:03.067601] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode275: invalid model number 'kQz(u:bBoaYXUR&6J@yx[P[tGt*i(xN]W:v1~.Ve\' 00:09:05.915 15:46:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:05.915 { 00:09:05.915 "nqn": "nqn.2016-06.io.spdk:cnode275", 00:09:05.915 "model_number": "kQz(u:bBoaYXUR&6J@yx[P[tGt*i(xN]W:v1~.Ve\\", 00:09:05.915 "method": "nvmf_create_subsystem", 00:09:05.915 "req_id": 1 00:09:05.915 } 00:09:05.915 Got JSON-RPC error response 00:09:05.915 response: 00:09:05.915 { 00:09:05.915 "code": -32602, 00:09:05.915 "message": "Invalid MN kQz(u:bBoaYXUR&6J@yx[P[tGt*i(xN]W:v1~.Ve\\" 00:09:05.915 }' 00:09:05.915 15:46:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:05.915 { 00:09:05.915 "nqn": "nqn.2016-06.io.spdk:cnode275", 00:09:05.915 "model_number": "kQz(u:bBoaYXUR&6J@yx[P[tGt*i(xN]W:v1~.Ve\\", 00:09:05.915 "method": "nvmf_create_subsystem", 00:09:05.915 "req_id": 1 00:09:05.915 } 00:09:05.915 Got JSON-RPC error response 00:09:05.915 response: 00:09:05.915 { 00:09:05.915 "code": -32602, 00:09:05.915 "message": "Invalid MN kQz(u:bBoaYXUR&6J@yx[P[tGt*i(xN]W:v1~.Ve\\" 00:09:05.915 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:05.915 
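Taken together, the two steps above assert the serial-number and model-number length checks: the 21-character string '*gu!T7!~FC6AMq#wxTp)U' and the 41-character string 'kQz(u:bBoaYXUR&6J@yx[P[tGt*i(xN]W:v1~.Ve\' are one character longer than the 20- and 40-byte SN/MN fields NVMe defines, so nvmf_create_subsystem must answer with JSON-RPC error -32602 and an 'Invalid SN' / 'Invalid MN' message. Re-issuing the same calls by hand would look roughly like this (a sketch using the rpc.py path and NQNs from this run, not an additional test step):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # 21-character serial number -> {"code": -32602, "message": "Invalid SN ..."}
  $RPC nvmf_create_subsystem -s '*gu!T7!~FC6AMq#wxTp)U' nqn.2016-06.io.spdk:cnode13128
  # 41-character model number -> {"code": -32602, "message": "Invalid MN ..."}
  $RPC nvmf_create_subsystem -d 'kQz(u:bBoaYXUR&6J@yx[P[tGt*i(xN]W:v1~.Ve\' nqn.2016-06.io.spdk:cnode275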
15:46:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:06.172 [2024-07-12 15:46:03.364656] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.172 15:46:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:06.429 15:46:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:06.429 15:46:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:06.429 15:46:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:06.429 15:46:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:06.429 15:46:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:06.687 [2024-07-12 15:46:03.862307] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:06.687 15:46:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:06.687 { 00:09:06.687 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:06.687 "listen_address": { 00:09:06.687 "trtype": "tcp", 00:09:06.687 "traddr": "", 00:09:06.687 "trsvcid": "4421" 00:09:06.687 }, 00:09:06.687 "method": "nvmf_subsystem_remove_listener", 00:09:06.687 "req_id": 1 00:09:06.687 } 00:09:06.687 Got JSON-RPC error response 00:09:06.687 response: 00:09:06.687 { 00:09:06.687 "code": -32602, 00:09:06.687 "message": "Invalid parameters" 00:09:06.687 }' 00:09:06.687 15:46:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:06.687 { 00:09:06.687 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:06.687 "listen_address": { 00:09:06.687 "trtype": "tcp", 00:09:06.687 "traddr": "", 00:09:06.687 "trsvcid": "4421" 00:09:06.687 }, 00:09:06.687 "method": "nvmf_subsystem_remove_listener", 00:09:06.687 "req_id": 1 00:09:06.687 } 00:09:06.687 Got JSON-RPC error response 00:09:06.687 response: 00:09:06.687 { 00:09:06.687 "code": -32602, 00:09:06.687 "message": "Invalid parameters" 00:09:06.687 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:06.687 15:46:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6003 -i 0 00:09:06.944 [2024-07-12 15:46:04.107085] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6003: invalid cntlid range [0-65519] 00:09:06.944 15:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:06.944 { 00:09:06.944 "nqn": "nqn.2016-06.io.spdk:cnode6003", 00:09:06.944 "min_cntlid": 0, 00:09:06.944 "method": "nvmf_create_subsystem", 00:09:06.944 "req_id": 1 00:09:06.944 } 00:09:06.944 Got JSON-RPC error response 00:09:06.944 response: 00:09:06.944 { 00:09:06.944 "code": -32602, 00:09:06.944 "message": "Invalid cntlid range [0-65519]" 00:09:06.944 }' 00:09:06.944 15:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:06.944 { 00:09:06.944 "nqn": "nqn.2016-06.io.spdk:cnode6003", 00:09:06.944 "min_cntlid": 0, 00:09:06.944 "method": "nvmf_create_subsystem", 00:09:06.944 "req_id": 1 00:09:06.944 } 00:09:06.944 Got JSON-RPC error response 00:09:06.944 response: 00:09:06.944 { 00:09:06.944 "code": -32602, 00:09:06.944 "message": "Invalid cntlid range [0-65519]" 
00:09:06.944 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:06.944 15:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25993 -i 65520 00:09:07.202 [2024-07-12 15:46:04.355891] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25993: invalid cntlid range [65520-65519] 00:09:07.202 15:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:07.202 { 00:09:07.202 "nqn": "nqn.2016-06.io.spdk:cnode25993", 00:09:07.202 "min_cntlid": 65520, 00:09:07.202 "method": "nvmf_create_subsystem", 00:09:07.202 "req_id": 1 00:09:07.202 } 00:09:07.202 Got JSON-RPC error response 00:09:07.202 response: 00:09:07.202 { 00:09:07.202 "code": -32602, 00:09:07.202 "message": "Invalid cntlid range [65520-65519]" 00:09:07.202 }' 00:09:07.202 15:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:07.202 { 00:09:07.203 "nqn": "nqn.2016-06.io.spdk:cnode25993", 00:09:07.203 "min_cntlid": 65520, 00:09:07.203 "method": "nvmf_create_subsystem", 00:09:07.203 "req_id": 1 00:09:07.203 } 00:09:07.203 Got JSON-RPC error response 00:09:07.203 response: 00:09:07.203 { 00:09:07.203 "code": -32602, 00:09:07.203 "message": "Invalid cntlid range [65520-65519]" 00:09:07.203 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:07.203 15:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8443 -I 0 00:09:07.460 [2024-07-12 15:46:04.612790] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8443: invalid cntlid range [1-0] 00:09:07.460 15:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:07.460 { 00:09:07.460 "nqn": "nqn.2016-06.io.spdk:cnode8443", 00:09:07.460 "max_cntlid": 0, 00:09:07.460 "method": "nvmf_create_subsystem", 00:09:07.460 "req_id": 1 00:09:07.460 } 00:09:07.460 Got JSON-RPC error response 00:09:07.460 response: 00:09:07.460 { 00:09:07.460 "code": -32602, 00:09:07.460 "message": "Invalid cntlid range [1-0]" 00:09:07.460 }' 00:09:07.460 15:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:07.460 { 00:09:07.460 "nqn": "nqn.2016-06.io.spdk:cnode8443", 00:09:07.460 "max_cntlid": 0, 00:09:07.460 "method": "nvmf_create_subsystem", 00:09:07.460 "req_id": 1 00:09:07.460 } 00:09:07.460 Got JSON-RPC error response 00:09:07.460 response: 00:09:07.460 { 00:09:07.460 "code": -32602, 00:09:07.460 "message": "Invalid cntlid range [1-0]" 00:09:07.460 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:07.460 15:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14123 -I 65520 00:09:07.718 [2024-07-12 15:46:04.857569] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14123: invalid cntlid range [1-65520] 00:09:07.718 15:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:07.718 { 00:09:07.718 "nqn": "nqn.2016-06.io.spdk:cnode14123", 00:09:07.718 "max_cntlid": 65520, 00:09:07.718 "method": "nvmf_create_subsystem", 00:09:07.718 "req_id": 1 00:09:07.718 } 00:09:07.718 Got JSON-RPC error response 00:09:07.718 response: 00:09:07.718 { 00:09:07.718 "code": -32602, 00:09:07.718 "message": "Invalid cntlid range [1-65520]" 00:09:07.718 }' 00:09:07.718 
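Together with the [6-5] case just below, the calls above cover the controller-ID range validation: after nvmf_create_transport --trtype tcp brings up the TCP transport, nvmf_create_subsystem is invoked with -i (min_cntlid) and -I (max_cntlid) values outside the window the error strings report (1 through 65519), or with min greater than max, and every call must fail with -32602 'Invalid cntlid range [...]'. Condensed to the bare rpc.py invocations seen in this run (a sketch, not additional test steps):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6003  -i 0        # "Invalid cntlid range [0-65519]"
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25993 -i 65520    # "Invalid cntlid range [65520-65519]"
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8443  -I 0        # "Invalid cntlid range [1-0]"
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14123 -I 65520    # "Invalid cntlid range [1-65520]"
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10116 -i 6 -I 5   # "Invalid cntlid range [6-5]"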
15:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:07.718 { 00:09:07.718 "nqn": "nqn.2016-06.io.spdk:cnode14123", 00:09:07.718 "max_cntlid": 65520, 00:09:07.718 "method": "nvmf_create_subsystem", 00:09:07.718 "req_id": 1 00:09:07.718 } 00:09:07.718 Got JSON-RPC error response 00:09:07.718 response: 00:09:07.718 { 00:09:07.718 "code": -32602, 00:09:07.718 "message": "Invalid cntlid range [1-65520]" 00:09:07.718 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:07.719 15:46:04 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10116 -i 6 -I 5 00:09:07.976 [2024-07-12 15:46:05.094367] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10116: invalid cntlid range [6-5] 00:09:07.976 15:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:07.976 { 00:09:07.976 "nqn": "nqn.2016-06.io.spdk:cnode10116", 00:09:07.976 "min_cntlid": 6, 00:09:07.976 "max_cntlid": 5, 00:09:07.976 "method": "nvmf_create_subsystem", 00:09:07.976 "req_id": 1 00:09:07.976 } 00:09:07.976 Got JSON-RPC error response 00:09:07.976 response: 00:09:07.976 { 00:09:07.976 "code": -32602, 00:09:07.976 "message": "Invalid cntlid range [6-5]" 00:09:07.976 }' 00:09:07.976 15:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:07.976 { 00:09:07.976 "nqn": "nqn.2016-06.io.spdk:cnode10116", 00:09:07.976 "min_cntlid": 6, 00:09:07.976 "max_cntlid": 5, 00:09:07.976 "method": "nvmf_create_subsystem", 00:09:07.976 "req_id": 1 00:09:07.976 } 00:09:07.976 Got JSON-RPC error response 00:09:07.976 response: 00:09:07.976 { 00:09:07.976 "code": -32602, 00:09:07.976 "message": "Invalid cntlid range [6-5]" 00:09:07.976 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:07.976 15:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:07.976 15:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:07.976 { 00:09:07.976 "name": "foobar", 00:09:07.976 "method": "nvmf_delete_target", 00:09:07.976 "req_id": 1 00:09:07.976 } 00:09:07.976 Got JSON-RPC error response 00:09:07.976 response: 00:09:07.976 { 00:09:07.976 "code": -32602, 00:09:07.976 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:07.976 }' 00:09:07.976 15:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:07.976 { 00:09:07.976 "name": "foobar", 00:09:07.976 "method": "nvmf_delete_target", 00:09:07.976 "req_id": 1 00:09:07.976 } 00:09:07.976 Got JSON-RPC error response 00:09:07.976 response: 00:09:07.976 { 00:09:07.976 "code": -32602, 00:09:07.976 "message": "The specified target doesn't exist, cannot delete it." 
00:09:07.977 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:07.977 15:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:07.977 15:46:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:07.977 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:07.977 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:07.977 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:07.977 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:07.977 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:07.977 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:07.977 rmmod nvme_tcp 00:09:07.977 rmmod nvme_fabrics 00:09:08.235 rmmod nvme_keyring 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 675560 ']' 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 675560 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 675560 ']' 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 675560 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 675560 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 675560' 00:09:08.235 killing process with pid 675560 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 675560 00:09:08.235 15:46:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 675560 00:09:08.495 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:08.495 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:08.495 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:08.495 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:08.495 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:08.495 15:46:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.495 15:46:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:08.495 15:46:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.402 15:46:07 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:10.402 00:09:10.402 real 0m8.890s 00:09:10.402 user 0m20.507s 00:09:10.402 sys 0m2.538s 00:09:10.402 15:46:07 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:10.402 15:46:07 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:09:10.402 ************************************ 00:09:10.402 END TEST nvmf_invalid 00:09:10.402 ************************************ 00:09:10.402 15:46:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:10.402 15:46:07 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:10.402 15:46:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:10.402 15:46:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.402 15:46:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:10.402 ************************************ 00:09:10.402 START TEST nvmf_abort 00:09:10.402 ************************************ 00:09:10.402 15:46:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:10.660 * Looking for test storage... 00:09:10.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:10.660 15:46:07 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:10.660 15:46:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.583 
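The array setup above is gather_supported_nvmf_pci_devs from nvmf/common.sh: it loads the known vendor:device ID pairs for Intel E810/X722 and Mellanox parts, then (in the lines that follow) walks the PCI bus and reports the two E810 ports at 0000:84:00.0 and 0000:84:00.1 (0x8086 - 0x159b, driver ice), whose net devices cvl_0_0 and cvl_0_1 become the test interfaces. The classification boils down to matching IDs against those arrays; a simplified sketch of the idea, not the script's actual implementation:

  # map a vendor:device pair to a NIC family (IDs taken from the arrays above)
  case "$vendor:$device" in
      0x8086:0x1592|0x8086:0x159b) family=e810 ;;
      0x8086:0x37d2)               family=x722 ;;
      0x15b3:*)                    family=mlx  ;;
      *)                           family=unsupported ;;
  esac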
15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:12.583 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:12.583 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:12.583 Found net devices under 0000:84:00.0: cvl_0_0 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:12.583 Found net devices under 0000:84:00.1: cvl_0_1 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:12.583 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:12.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:12.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:09:12.842 00:09:12.842 --- 10.0.0.2 ping statistics --- 00:09:12.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.842 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:09:12.842 00:09:12.842 --- 10.0.0.1 ping statistics --- 00:09:12.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.842 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=678216 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 678216 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 678216 ']' 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:12.842 15:46:09 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:12.842 [2024-07-12 15:46:09.973174] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
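The trace above is the standard nvmf/common.sh TCP loopback bring-up for a phy run: the second E810 port (cvl_0_1) stays in the default network namespace as the initiator side, while the first port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side, so initiator and target traffic can go over the physical ports rather than local loopback. Condensed from the commands traced above (interface names and addresses taken verbatim from the log, the address flushes omitted), the setup amounts to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the netns
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port on the initiator-side interface
  ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

nvmf_tgt is then launched with the ip netns exec cvl_0_0_ns_spdk prefix (NVMF_TARGET_NS_CMD); that is the target whose startup output continues below.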
00:09:12.842 [2024-07-12 15:46:09.973250] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.842 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.842 [2024-07-12 15:46:10.043883] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:13.099 [2024-07-12 15:46:10.158166] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:13.099 [2024-07-12 15:46:10.158226] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:13.100 [2024-07-12 15:46:10.158246] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:13.100 [2024-07-12 15:46:10.158272] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:13.100 [2024-07-12 15:46:10.158288] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:13.100 [2024-07-12 15:46:10.158378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:13.100 [2024-07-12 15:46:10.158443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:13.100 [2024-07-12 15:46:10.158448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:13.100 [2024-07-12 15:46:10.298898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:13.100 Malloc0 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:13.100 Delay0 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:13.100 [2024-07-12 15:46:10.372161] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:13.100 15:46:10 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:13.358 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.358 [2024-07-12 15:46:10.477586] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:15.889 Initializing NVMe Controllers 00:09:15.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:15.889 controller IO queue size 128 less than required 00:09:15.889 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:15.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:15.889 Initialization complete. Launching workers. 
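Condensed from the RPCs traced above (the full /var/jenkins/... paths shortened to rpc.py; the commands are verbatim from the trace, the comments are interpretation), target/abort.sh configures the namespaced target with a deliberately slow namespace and then drives queue-depth-128 I/O at it so the abort example has outstanding requests to cancel; the completion and abort counts it reports follow below:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0                   # 64 MB malloc bdev, 4096-byte blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000             # wrap it in a delay bdev so I/O stays outstanding long enough to abort
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
         -c 0x1 -t 1 -l warning -q 128                           # one second of I/O plus abort requests against cnode0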
00:09:15.889 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33285 00:09:15.889 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33346, failed to submit 62 00:09:15.889 success 33289, unsuccess 57, failed 0 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:15.889 rmmod nvme_tcp 00:09:15.889 rmmod nvme_fabrics 00:09:15.889 rmmod nvme_keyring 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 678216 ']' 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 678216 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 678216 ']' 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 678216 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 678216 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 678216' 00:09:15.889 killing process with pid 678216 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 678216 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 678216 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:15.889 15:46:12 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.792 15:46:14 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:17.792 00:09:17.792 real 0m7.294s 00:09:17.792 user 0m10.435s 00:09:17.792 sys 0m2.590s 00:09:17.792 15:46:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:17.792 15:46:14 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.792 ************************************ 00:09:17.792 END TEST nvmf_abort 00:09:17.792 ************************************ 00:09:17.792 15:46:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:17.792 15:46:14 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:17.792 15:46:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:17.792 15:46:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:17.792 15:46:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:17.792 ************************************ 00:09:17.792 START TEST nvmf_ns_hotplug_stress 00:09:17.792 ************************************ 00:09:17.792 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:17.792 * Looking for test storage... 00:09:17.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:17.792 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.052 15:46:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:18.052 15:46:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:18.052 15:46:15 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:19.956 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:19.957 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:19.957 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:19.957 15:46:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:19.957 Found net devices under 0000:84:00.0: cvl_0_0 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:19.957 Found net devices under 0000:84:00.1: cvl_0_1 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:19.957 15:46:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:19.957 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:20.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:09:20.215 00:09:20.215 --- 10.0.0.2 ping statistics --- 00:09:20.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.215 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:20.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:09:20.215 00:09:20.215 --- 10.0.0.1 ping statistics --- 00:09:20.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.215 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=680457 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 680457 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 680457 ']' 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.215 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.215 [2024-07-12 15:46:17.385925] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
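With the target for the second test up, ns_hotplug_stress.sh (traced below) churns namespaces while an initiator keeps reading. Condensed, with paths shortened to rpc.py and the loop shape approximated here (the exact per-iteration ordering is in the trace that follows), it amounts to roughly:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py bdev_null_create NULL1 1000 512
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # 30 seconds of reads in the background; the namespaces are churned underneath it
  build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
         -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  null_size=1000
  while kill -0 "$PERF_PID"; do                                  # approximate loop condition: stop once perf exits
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 "$null_size"
  done

The 'Message suppressed ... Read completed with error (sct=0, sc=11)' lines in the trace below are the stress test doing its job: reads hitting namespace 1 while it is detached complete with an error, and perf keeps running once Delay0 is re-attached.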
00:09:20.215 [2024-07-12 15:46:17.386017] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:20.215 EAL: No free 2048 kB hugepages reported on node 1 00:09:20.215 [2024-07-12 15:46:17.449650] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:20.474 [2024-07-12 15:46:17.558234] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:20.474 [2024-07-12 15:46:17.558297] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:20.474 [2024-07-12 15:46:17.558319] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:20.474 [2024-07-12 15:46:17.558336] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:20.474 [2024-07-12 15:46:17.558349] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:20.474 [2024-07-12 15:46:17.558451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:20.474 [2024-07-12 15:46:17.558522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:20.474 [2024-07-12 15:46:17.558528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.474 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.474 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:09:20.474 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.474 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:20.474 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:20.474 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.474 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:20.474 15:46:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:20.732 [2024-07-12 15:46:17.979637] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.732 15:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:21.018 15:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:21.275 [2024-07-12 15:46:18.534439] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.275 15:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:21.531 15:46:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:09:21.788 Malloc0 00:09:21.788 15:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:22.045 Delay0 00:09:22.045 15:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:22.301 15:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:22.558 NULL1 00:09:22.558 15:46:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:22.815 15:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=680854 00:09:22.815 15:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:22.815 15:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:22.815 15:46:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:22.815 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.181 Read completed with error (sct=0, sc=11) 00:09:24.181 15:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:24.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.181 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:24.438 15:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:24.438 15:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:24.695 true 00:09:24.695 15:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:24.695 15:46:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:25.626 15:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:25.626 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:25.626 15:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:25.626 15:46:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:25.884 true 00:09:25.884 15:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:25.884 15:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:26.141 15:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.397 15:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:26.397 15:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:26.654 true 00:09:26.654 15:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:26.654 15:46:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.618 15:46:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.618 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.875 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:27.875 15:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:27.875 15:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:28.131 true 00:09:28.131 15:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:28.131 15:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.388 15:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.646 15:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:28.646 15:46:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:28.903 true 00:09:28.903 15:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:28.903 15:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.836 15:46:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.836 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.094 15:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:30.094 15:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:30.351 true 00:09:30.351 15:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:30.351 15:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.609 15:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.866 15:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:30.866 15:46:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:31.123 true 00:09:31.123 15:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:31.123 15:46:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.053 15:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.310 15:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:32.310 15:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:32.310 true 00:09:32.568 15:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:32.568 15:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.826 15:46:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.083 15:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:33.083 15:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:33.341 true 00:09:33.341 15:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:33.341 15:46:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.274 15:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.274 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:34.274 15:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:34.274 15:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:34.532 true 00:09:34.532 15:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:34.532 15:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.789 15:46:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.046 15:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:35.046 15:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:35.303 true 00:09:35.303 15:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:35.303 15:46:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:36.235 15:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:36.492 15:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:36.492 15:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:36.749 true 00:09:36.749 15:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:36.749 15:46:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.006 15:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.310 15:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:37.310 15:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:37.580 true 00:09:37.580 15:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:37.580 15:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.580 15:46:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.838 15:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:37.838 15:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:38.096 true 00:09:38.096 15:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:38.096 15:46:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.469 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:39.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:39.469 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:39.469 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:39.727 true 00:09:39.727 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:39.727 15:46:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:39.985 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.242 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:40.242 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:40.500 true 00:09:40.500 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:40.500 15:46:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.758 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.016 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:41.016 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:41.273 true 00:09:41.273 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:41.273 15:46:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.645 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.645 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:09:42.645 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.645 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:42.645 15:46:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:42.903 true 00:09:42.903 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:42.903 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.160 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.418 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:43.418 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:43.676 true 00:09:43.676 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:43.676 15:46:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:44.609 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.609 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:44.867 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:44.867 15:46:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:45.125 true 00:09:45.125 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:45.125 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.382 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.640 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:45.640 15:46:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:45.896 true 00:09:45.896 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:45.896 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.828 15:46:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.828 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:46.828 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:47.085 true 00:09:47.085 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:47.085 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.342 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.599 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:47.599 15:46:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:47.855 true 00:09:47.855 15:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:47.855 15:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.787 15:46:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:48.787 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:49.043 15:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:49.043 15:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:49.300 true 00:09:49.300 15:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:49.300 15:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.556 15:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:49.813 15:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:49.813 15:46:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:50.068 true 00:09:50.069 15:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:50.069 15:46:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.000 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:51.000 15:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.258 15:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:51.258 15:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:51.515 true 00:09:51.515 15:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:51.515 15:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.773 15:46:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.773 15:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:51.773 15:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:52.030 true 00:09:52.288 15:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:52.288 15:46:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.221 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.221 15:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.221 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:53.221 Initializing NVMe Controllers 00:09:53.221 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:53.221 Controller IO queue size 128, less than required. 00:09:53.221 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:53.221 Controller IO queue size 128, less than required. 00:09:53.221 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:53.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:53.221 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:53.221 Initialization complete. Launching workers. 
00:09:53.221 ========================================================
00:09:53.221 Latency(us)
00:09:53.221 Device Information : IOPS MiB/s Average min max
00:09:53.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 941.86 0.46 71795.95 2973.25 1082166.39
00:09:53.221 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11323.35 5.53 11291.86 1626.17 361877.37
00:09:53.221 ========================================================
00:09:53.221 Total : 12265.22 5.99 15938.05 1626.17 1082166.39
00:09:53.221
00:09:53.479 15:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:53.479 15:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:53.479 true 00:09:53.479 15:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 680854 00:09:53.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (680854) - No such process 00:09:53.479 15:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 680854 00:09:53.479 15:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.736 15:46:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:53.997 15:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:53.997 15:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:53.997 15:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:53.997 15:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:53.997 15:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:54.256 null0 00:09:54.256 15:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:54.256 15:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:54.256 15:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:54.513 null1 00:09:54.513 15:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:54.513 15:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:54.513 15:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:54.771 null2 00:09:54.771 15:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:54.771 15:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:54.771 15:46:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:55.029 null3
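The shell traces above come from the single-namespace phase of ns_hotplug_stress.sh: while the background perf I/O job (PID 680854 here) is alive, the script keeps hot-removing and re-adding namespace 1 backed by the Delay0 bdev and growing the NULL1 bdev, and once kill -0 fails it waits for the perf job (whose latency summary is printed above) and removes the remaining namespaces. Below is a minimal sketch of that loop reconstructed from the traced script line numbers (@44-@55); the $rpc and $perf_pid variables and the exact loop form are assumptions, not the verbatim test script.

# Reconstruction of the @44-@55 traces above (sketch, not the verbatim test script)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py    # RPC client path taken from the log
null_size=1000
while kill -0 "$perf_pid" 2>/dev/null; do                               # @44: loop while the perf job is still running
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # @45: hot-remove NSID 1 under active I/O
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0      # @46: hot-add it back, backed by the Delay0 bdev
    null_size=$((null_size + 1))                                        # @49: next target size (1010, 1011, ... in the log)
    "$rpc" bdev_null_resize NULL1 "$null_size"                          # @50: resize the NULL1 bdev while it is in use
done
wait "$perf_pid"                                                        # @53: collect the perf job; its summary is shown above
"$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1            # @54-@55: final cleanup of both namespaces
"$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2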
00:09:55.029 15:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:55.029 15:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:55.029 15:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:55.286 null4 00:09:55.286 15:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:55.286 15:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:55.286 15:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:55.543 null5 00:09:55.543 15:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:55.543 15:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:55.543 15:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:55.800 null6 00:09:55.800 15:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:55.800 15:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:55.800 15:46:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:56.057 null7 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
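From script line 58 onward the log shows the concurrent phase: eight null bdevs (null0 through null7, 100 MiB with a 4096-byte block size) are created with bdev_null_create, then eight add_remove workers are launched in the background, one namespace ID per worker, and the script waits for all of them (the wait 684804 684805 ... trace just below). A rough reconstruction from the @14-@18 and @58-@66 traces follows; the function and variable names mirror the traced script, but the exact control flow is an assumption.

# Reconstruction of the @14-@18 and @58-@66 traces (sketch, not the verbatim test script)
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # RPC client path taken from the log

add_remove() {                                      # @14-@18: one worker per namespace ID
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # attach the bdev as NSID $nsid
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # and detach it again
    done
}

nthreads=8
pids=()                                             # @58
for ((i = 0; i < nthreads; i++)); do                # @59-@60: one null bdev per worker
    "$rpc" bdev_null_create "null$i" 100 4096       # 100 MiB, 4096-byte blocks
done
for ((i = 0; i < nthreads; i++)); do                # @62-@64: run the workers concurrently
    add_remove $((i + 1)) "null$i" &
    pids+=("$!")
done
wait "${pids[@]}"                                   # @66: e.g. wait 684804 684805 684807 ... in the log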
00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 684804 684805 684807 684809 684811 684813 684815 684817 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.057 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:56.316 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.316 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:56.316 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:56.316 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:56.316 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:56.316 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:56.316 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:56.316 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:56.575 15:46:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:56.833 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.833 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:56.833 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:56.833 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:56.833 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:56.833 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:56.833 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:56.833 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.092 15:46:54 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.092 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:57.350 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.350 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:57.350 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:57.350 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:57.350 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:57.350 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:57.350 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:57.350 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:57.608 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:57.609 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.609 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.609 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:57.609 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:57.609 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:57.609 15:46:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:57.867 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:57.867 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:57.867 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.867 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:57.867 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:58.125 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:58.125 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:58.125 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:58.383 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.384 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.384 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:58.384 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.384 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.384 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:58.384 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.384 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.384 
15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:58.641 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:58.641 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:58.641 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:58.641 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:58.641 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.641 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:58.641 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:58.641 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.899 15:46:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:58.899 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:58.899 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:58.899 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:59.201 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:59.202 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:59.202 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:59.202 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:59.202 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:59.202 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.202 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:59.202 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:59.489 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:59.745 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:59.745 
15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:59.745 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:59.745 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:59.745 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:59.745 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:59.745 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:59.745 15:46:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.002 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:00.260 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:00.260 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:00.260 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:00.260 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:00.260 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:00.260 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:00.260 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.260 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:00.517 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.517 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.517 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:00.517 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:10:00.517 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.518 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:00.775 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:00.775 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:00.775 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:00.776 
15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:00.776 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:00.776 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:00.776 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.776 15:46:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.033 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:01.291 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:01.291 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:01.291 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:01.291 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:01.291 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:01.291 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:01.291 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:01.291 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.549 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.549 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.549 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.549 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.549 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.549 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.549 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.549 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.549 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
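Each burst above is the same three lines of target/ns_hotplug_stress.sh, identified by the @16/@17/@18 markers in the trace: a counter capped at 10 (line 16), an nvmf_subsystem_add_ns call attaching one of the null bdevs null0..null7 to nqn.2016-06.io.spdk:cnode1 as namespace 1..8 (line 17), and the matching nvmf_subsystem_remove_ns (line 18). The scrambled ordering of the add calls within each wave shows several of these loops running concurrently, one per namespace. A minimal reconstruction of the traced pattern, for orientation only (the add_remove worker name and the backgrounding are assumptions; only the three numbered script lines and the RPC arguments come from the log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {                                  # hypothetical per-namespace worker
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; ++i)); do              # traced at @16 as (( ++i )) / (( i < 10 ))
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"    # @17
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"            # @18
        done
    }

    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &          # eight hot-plug workers racing each other
    done
    wait

The point of the stress is that attach and detach RPCs land on the same subsystem from multiple callers at once, so the target has to serialize the namespace hot-plug events correctly rather than being fed one tidy request at a time.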
00:10:01.549 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.549 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.549 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.549 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.549 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:01.550 rmmod nvme_tcp 00:10:01.550 rmmod nvme_fabrics 00:10:01.550 rmmod nvme_keyring 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 680457 ']' 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 680457 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 680457 ']' 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 680457 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 680457 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 680457' 00:10:01.550 killing process with pid 680457 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 680457 00:10:01.550 15:46:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 680457 00:10:01.808 15:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:01.808 15:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:01.808 15:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- 
# nvmf_tcp_fini 00:10:01.808 15:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:01.808 15:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:01.808 15:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.808 15:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:01.808 15:46:59 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.366 15:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:04.366 00:10:04.366 real 0m46.094s 00:10:04.366 user 3m29.714s 00:10:04.366 sys 0m16.509s 00:10:04.366 15:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.366 15:47:01 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:04.366 ************************************ 00:10:04.366 END TEST nvmf_ns_hotplug_stress 00:10:04.366 ************************************ 00:10:04.366 15:47:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:04.366 15:47:01 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:04.366 15:47:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:04.366 15:47:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.366 15:47:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:04.366 ************************************ 00:10:04.366 START TEST nvmf_connect_stress 00:10:04.366 ************************************ 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:04.366 * Looking for test storage... 
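Before connect_stress starts probing for test storage, the hotplug test has just torn itself down via nvmftestfini (nvmf/common.sh): drop the signal trap, sync, unload the NVMe-over-Fabrics kernel modules, kill the nvmf_tgt process, then remove the test network namespace and flush the initiator-side address. An outline of that order, taken from the commands visible in the trace (the common.sh function body is not reproduced in this log, so treat this as a summary; $nvmfpid is the variable name common.sh uses elsewhere in this run):

    trap - SIGINT SIGTERM EXIT
    sync
    modprobe -v -r nvme-tcp            # also drops nvme_fabrics and nvme_keyring, per the rmmod output
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"             # 680457 in this run: kill the target and wait for it
    _remove_spdk_ns                    # delete the cvl_0_0_ns_spdk namespace
    ip -4 addr flush cvl_0_1

The summary line shows 0m46s of wall-clock time against roughly 3m30s of CPU, most of which is presumably the target's poll-mode reactors spinning for the duration rather than the RPC traffic itself.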
00:10:04.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.366 15:47:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:04.367 15:47:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.270 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:06.271 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:06.271 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:06.271 Found net devices under 0000:84:00.0: cvl_0_0 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.271 15:47:03 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:06.271 Found net devices under 0000:84:00.1: cvl_0_1 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:06.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:10:06.271 00:10:06.271 --- 10.0.0.2 ping statistics --- 00:10:06.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.271 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:10:06.271 00:10:06.271 --- 10.0.0.1 ping statistics --- 00:10:06.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.271 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=687587 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 687587 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 687587 ']' 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.271 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.271 [2024-07-12 15:47:03.422664] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
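With NET_TYPE=phy and a TCP transport, nvmftestinit first identifies the two e810 ports (0000:84:00.0 and .1, exposed as cvl_0_0 and cvl_0_1) and then splits them across network namespaces so one physical port can talk to the other over a real link: the target-side cvl_0_0 is moved into a namespace named cvl_0_0_ns_spdk with 10.0.0.2/24, the initiator-side cvl_0_1 keeps 10.0.0.1/24 in the root namespace, TCP port 4420 is opened in iptables, and both directions are verified with a single ping. Condensed from the commands traced above, in the same order and spellings as the log (a summary of the trace, not the nvmf/common.sh source):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target, 0.215 ms here
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator, 0.116 ms

Because nvmf_tgt is then launched as 'ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE', the 10.0.0.2 listener exists only inside that namespace, and the initiator reaches it over the link between the two ports rather than over loopback.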
00:10:06.271 [2024-07-12 15:47:03.422751] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.271 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.271 [2024-07-12 15:47:03.482804] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:06.529 [2024-07-12 15:47:03.585095] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.529 [2024-07-12 15:47:03.585158] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.529 [2024-07-12 15:47:03.585185] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.529 [2024-07-12 15:47:03.585196] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.529 [2024-07-12 15:47:03.585205] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.529 [2024-07-12 15:47:03.585295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.529 [2024-07-12 15:47:03.585325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.529 [2024-07-12 15:47:03.585327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.529 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.529 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:10:06.529 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:06.529 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:06.529 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.529 15:47:03 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.529 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:06.529 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.529 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.530 [2024-07-12 15:47:03.731930] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.530 [2024-07-12 15:47:03.771917] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:06.530 NULL1 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=687724 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
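connect_stress.sh then builds its target with four RPCs and launches the stress client against it: a TCP transport (with the traced -o -u 8192 options), subsystem nqn.2016-06.io.spdk:cnode1 allowing up to 10 namespaces, a listener on 10.0.0.2:4420, and a 1000-block x 512-byte null bdev called NULL1. The script issues these through rpc_cmd; written out here as direct rpc.py calls with the same arguments (an equivalent spelling, not a copy of the script):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512

    # stress client: core mask 0x1, 10 seconds of connect/disconnect against the listener
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!     # 687724 in this run; capturing $! is an assumption consistent with the @21 assignment

The seq 1 20 / cat loop that follows (script lines 27-28) appears to assemble rpc.txt, the RPC batch replayed by the watchdog loop below; the cat's redirection itself is not visible in xtrace output.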
00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.530 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.787 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.787 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.787 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.787 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.787 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.787 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.787 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:06.787 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:06.787 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:06.787 15:47:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:06.787 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.787 15:47:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.045 15:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.045 15:47:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:07.045 15:47:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:07.045 15:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.045 15:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.302 15:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.302 15:47:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:07.302 15:47:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:07.302 15:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.302 15:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:07.560 15:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.560 15:47:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:07.560 
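From here to the end of the run the script simply watchdogs the client: connect_stress.sh line 34 checks that PID 687724 is still alive with kill -0, and line 35 fires rpc_cmd at the target while it is, so the subsystem keeps servicing management commands in parallel with the connect/disconnect storm. Schematically (feeding rpc_cmd the rpc.txt prepared above is an inference, since xtrace does not print redirections, and any pacing between iterations is likewise not visible in the trace):

    while kill -0 "$PERF_PID"; do      # @34: is the stress client still running?
        rpc_cmd < "$rpcs"              # @35: replay the batched RPCs against the live target
    done

Successive iterations in the trace land roughly a quarter to half a second apart, presumably the time one pass of the RPC batch takes on this target.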
15:47:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:07.560 15:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.560 15:47:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.124 15:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.124 15:47:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:08.124 15:47:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.124 15:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.124 15:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.382 15:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.382 15:47:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:08.382 15:47:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.382 15:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.382 15:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.639 15:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.639 15:47:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:08.639 15:47:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.639 15:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.639 15:47:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:08.897 15:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:08.897 15:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:08.897 15:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:08.897 15:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:08.897 15:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.155 15:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.155 15:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:09.155 15:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:09.155 15:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.155 15:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.720 15:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.720 15:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:09.720 15:47:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:09.720 15:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.720 15:47:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.977 15:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.977 15:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:09.977 15:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:10:09.977 15:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.977 15:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.234 15:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.234 15:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:10.234 15:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:10.234 15:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.234 15:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.492 15:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.492 15:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:10.492 15:47:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:10.492 15:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.492 15:47:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:10.749 15:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:10.749 15:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:10.749 15:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:10.749 15:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:10.749 15:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:11.315 15:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.315 15:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:11.315 15:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:11.315 15:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.315 15:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:11.573 15:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.573 15:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:11.573 15:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:11.573 15:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.573 15:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:11.830 15:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.830 15:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:11.830 15:47:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:11.830 15:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.830 15:47:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.088 15:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.088 15:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:12.088 15:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:12.088 15:47:09 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.088 15:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.346 15:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.346 15:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:12.346 15:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:12.346 15:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.346 15:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:12.911 15:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.911 15:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:12.911 15:47:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:12.911 15:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.911 15:47:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.169 15:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.169 15:47:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:13.169 15:47:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:13.169 15:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.169 15:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.445 15:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.445 15:47:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:13.445 15:47:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:13.445 15:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.445 15:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.703 15:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.703 15:47:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:13.703 15:47:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:13.703 15:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.703 15:47:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:13.959 15:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.959 15:47:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:13.959 15:47:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:13.959 15:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.959 15:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.521 15:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.521 15:47:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:14.521 15:47:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:14.521 15:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.521 
15:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:14.778 15:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.778 15:47:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:14.778 15:47:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:14.778 15:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.778 15:47:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.035 15:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.035 15:47:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:15.035 15:47:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:15.035 15:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.035 15:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.291 15:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.291 15:47:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:15.291 15:47:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:15.291 15:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.291 15:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:15.546 15:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.546 15:47:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:15.546 15:47:12 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:15.546 15:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.546 15:47:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.110 15:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.110 15:47:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:16.110 15:47:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:16.110 15:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.110 15:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.366 15:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.367 15:47:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:16.367 15:47:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:16.367 15:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.367 15:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:16.624 15:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.624 15:47:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:16.624 15:47:13 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:16.624 15:47:13 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.624 15:47:13 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.624 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 687724 00:10:16.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (687724) - No such process 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 687724 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:16.882 rmmod nvme_tcp 00:10:16.882 rmmod nvme_fabrics 00:10:16.882 rmmod nvme_keyring 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 687587 ']' 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 687587 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 687587 ']' 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 687587 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:16.882 15:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 687587 00:10:17.141 15:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:17.141 15:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:17.141 15:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 687587' 00:10:17.141 killing process with pid 687587 00:10:17.141 15:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 687587 00:10:17.141 15:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 687587 00:10:17.401 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:17.401 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:17.401 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:17.401 15:47:14 nvmf_tcp.nvmf_connect_stress 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:17.401 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:17.401 15:47:14 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.401 15:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:17.401 15:47:14 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.310 15:47:16 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:19.310 00:10:19.310 real 0m15.322s 00:10:19.310 user 0m37.954s 00:10:19.310 sys 0m6.321s 00:10:19.310 15:47:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.310 15:47:16 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:19.310 ************************************ 00:10:19.310 END TEST nvmf_connect_stress 00:10:19.310 ************************************ 00:10:19.310 15:47:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:19.310 15:47:16 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:19.310 15:47:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:19.310 15:47:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.310 15:47:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:19.310 ************************************ 00:10:19.310 START TEST nvmf_fused_ordering 00:10:19.310 ************************************ 00:10:19.310 15:47:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:19.310 * Looking for test storage... 
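(Editorial note on the connect_stress trace above: the loop visible there amounts to polling the stress process with kill -0 while issuing RPCs, then tearing the target down once kill reports "No such process". A minimal bash sketch of that control flow, with $stress_pid, $rpc_py and $nvmf_pid as hypothetical placeholders rather than values from this run, and not the literal contents of connect_stress.sh or nvmf/common.sh:

    # Poll the stress workload; kill -0 only checks that the PID exists, it sends no signal.
    while kill -0 "$stress_pid" 2>/dev/null; do
        "$rpc_py" nvmf_get_subsystems > /dev/null   # example RPC to keep the target busy
        sleep 1
    done
    wait "$stress_pid" 2>/dev/null || true          # reap it after "No such process"

    # Teardown mirrors the nvmftestfini steps in the log: stop the target, unload modules.
    kill "$nvmf_pid" && wait "$nvmf_pid"
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
)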
00:10:19.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.310 15:47:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.310 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:19.310 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.310 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.310 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.310 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.310 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.310 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.310 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.310 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.310 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.310 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:19.569 15:47:16 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:21.474 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:21.474 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:21.474 Found net devices under 0000:84:00.0: cvl_0_0 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.474 15:47:18 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:21.474 Found net devices under 0000:84:00.1: cvl_0_1 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.474 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.732 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.732 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.732 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:21.732 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.732 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.732 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.732 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:21.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:21.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:10:21.732 00:10:21.732 --- 10.0.0.2 ping statistics --- 00:10:21.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.732 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:10:21.732 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:10:21.732 00:10:21.732 --- 10.0.0.1 ping statistics --- 00:10:21.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.732 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:10:21.732 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.732 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:10:21.732 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.732 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=690895 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 690895 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 690895 ']' 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:21.733 15:47:18 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.733 [2024-07-12 15:47:18.929430] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:10:21.733 [2024-07-12 15:47:18.929524] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.733 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.733 [2024-07-12 15:47:18.997760] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.991 [2024-07-12 15:47:19.109863] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.991 [2024-07-12 15:47:19.109918] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.991 [2024-07-12 15:47:19.109948] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.991 [2024-07-12 15:47:19.109960] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.991 [2024-07-12 15:47:19.109971] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.991 [2024-07-12 15:47:19.110005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.991 [2024-07-12 15:47:19.256632] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:21.991 [2024-07-12 15:47:19.272848] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.991 15:47:19 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.991 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:22.278 NULL1 00:10:22.278 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.278 15:47:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:10:22.278 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.278 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:22.278 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.278 15:47:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:22.278 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.278 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:22.278 15:47:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.278 15:47:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:22.278 [2024-07-12 15:47:19.320669] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:10:22.278 [2024-07-12 15:47:19.320711] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690928 ] 00:10:22.278 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.846 Attached to nqn.2016-06.io.spdk:cnode1 00:10:22.846 Namespace ID: 1 size: 1GB 00:10:22.846 fused_ordering(0) 00:10:22.846 fused_ordering(1) 00:10:22.846 fused_ordering(2) 00:10:22.846 fused_ordering(3) 00:10:22.846 fused_ordering(4) 00:10:22.846 fused_ordering(5) 00:10:22.846 fused_ordering(6) 00:10:22.846 fused_ordering(7) 00:10:22.846 fused_ordering(8) 00:10:22.846 fused_ordering(9) 00:10:22.846 fused_ordering(10) 00:10:22.846 fused_ordering(11) 00:10:22.846 fused_ordering(12) 00:10:22.846 fused_ordering(13) 00:10:22.846 fused_ordering(14) 00:10:22.846 fused_ordering(15) 00:10:22.846 fused_ordering(16) 00:10:22.846 fused_ordering(17) 00:10:22.846 fused_ordering(18) 00:10:22.846 fused_ordering(19) 00:10:22.846 fused_ordering(20) 00:10:22.846 fused_ordering(21) 00:10:22.846 fused_ordering(22) 00:10:22.846 fused_ordering(23) 00:10:22.846 fused_ordering(24) 00:10:22.846 fused_ordering(25) 00:10:22.846 fused_ordering(26) 00:10:22.846 fused_ordering(27) 00:10:22.846 fused_ordering(28) 00:10:22.846 fused_ordering(29) 00:10:22.846 fused_ordering(30) 00:10:22.846 fused_ordering(31) 00:10:22.846 fused_ordering(32) 00:10:22.846 fused_ordering(33) 00:10:22.846 fused_ordering(34) 00:10:22.846 fused_ordering(35) 00:10:22.846 fused_ordering(36) 00:10:22.846 fused_ordering(37) 00:10:22.846 fused_ordering(38) 00:10:22.846 fused_ordering(39) 00:10:22.846 fused_ordering(40) 00:10:22.846 fused_ordering(41) 00:10:22.846 fused_ordering(42) 00:10:22.846 fused_ordering(43) 00:10:22.846 
fused_ordering(44) 00:10:22.846 fused_ordering(45) 00:10:22.846 fused_ordering(46) 00:10:22.846 fused_ordering(47) 00:10:22.846 fused_ordering(48) 00:10:22.846 fused_ordering(49) 00:10:22.846 fused_ordering(50) 00:10:22.846 fused_ordering(51) 00:10:22.846 fused_ordering(52) 00:10:22.846 fused_ordering(53) 00:10:22.846 fused_ordering(54) 00:10:22.846 fused_ordering(55) 00:10:22.846 fused_ordering(56) 00:10:22.846 fused_ordering(57) 00:10:22.846 fused_ordering(58) 00:10:22.846 fused_ordering(59) 00:10:22.846 fused_ordering(60) 00:10:22.846 fused_ordering(61) 00:10:22.846 fused_ordering(62) 00:10:22.846 fused_ordering(63) 00:10:22.846 fused_ordering(64) 00:10:22.846 fused_ordering(65) 00:10:22.846 fused_ordering(66) 00:10:22.846 fused_ordering(67) 00:10:22.846 fused_ordering(68) 00:10:22.846 fused_ordering(69) 00:10:22.846 fused_ordering(70) 00:10:22.846 fused_ordering(71) 00:10:22.846 fused_ordering(72) 00:10:22.846 fused_ordering(73) 00:10:22.846 fused_ordering(74) 00:10:22.846 fused_ordering(75) 00:10:22.846 fused_ordering(76) 00:10:22.846 fused_ordering(77) 00:10:22.846 fused_ordering(78) 00:10:22.846 fused_ordering(79) 00:10:22.846 fused_ordering(80) 00:10:22.846 fused_ordering(81) 00:10:22.846 fused_ordering(82) 00:10:22.846 fused_ordering(83) 00:10:22.846 fused_ordering(84) 00:10:22.846 fused_ordering(85) 00:10:22.846 fused_ordering(86) 00:10:22.846 fused_ordering(87) 00:10:22.846 fused_ordering(88) 00:10:22.846 fused_ordering(89) 00:10:22.846 fused_ordering(90) 00:10:22.846 fused_ordering(91) 00:10:22.846 fused_ordering(92) 00:10:22.846 fused_ordering(93) 00:10:22.846 fused_ordering(94) 00:10:22.846 fused_ordering(95) 00:10:22.846 fused_ordering(96) 00:10:22.846 fused_ordering(97) 00:10:22.846 fused_ordering(98) 00:10:22.846 fused_ordering(99) 00:10:22.846 fused_ordering(100) 00:10:22.846 fused_ordering(101) 00:10:22.846 fused_ordering(102) 00:10:22.846 fused_ordering(103) 00:10:22.846 fused_ordering(104) 00:10:22.846 fused_ordering(105) 00:10:22.846 fused_ordering(106) 00:10:22.846 fused_ordering(107) 00:10:22.846 fused_ordering(108) 00:10:22.846 fused_ordering(109) 00:10:22.846 fused_ordering(110) 00:10:22.846 fused_ordering(111) 00:10:22.846 fused_ordering(112) 00:10:22.846 fused_ordering(113) 00:10:22.846 fused_ordering(114) 00:10:22.846 fused_ordering(115) 00:10:22.846 fused_ordering(116) 00:10:22.846 fused_ordering(117) 00:10:22.846 fused_ordering(118) 00:10:22.846 fused_ordering(119) 00:10:22.846 fused_ordering(120) 00:10:22.846 fused_ordering(121) 00:10:22.846 fused_ordering(122) 00:10:22.846 fused_ordering(123) 00:10:22.846 fused_ordering(124) 00:10:22.846 fused_ordering(125) 00:10:22.846 fused_ordering(126) 00:10:22.846 fused_ordering(127) 00:10:22.846 fused_ordering(128) 00:10:22.846 fused_ordering(129) 00:10:22.846 fused_ordering(130) 00:10:22.846 fused_ordering(131) 00:10:22.846 fused_ordering(132) 00:10:22.846 fused_ordering(133) 00:10:22.846 fused_ordering(134) 00:10:22.846 fused_ordering(135) 00:10:22.846 fused_ordering(136) 00:10:22.846 fused_ordering(137) 00:10:22.846 fused_ordering(138) 00:10:22.846 fused_ordering(139) 00:10:22.846 fused_ordering(140) 00:10:22.846 fused_ordering(141) 00:10:22.846 fused_ordering(142) 00:10:22.846 fused_ordering(143) 00:10:22.846 fused_ordering(144) 00:10:22.846 fused_ordering(145) 00:10:22.846 fused_ordering(146) 00:10:22.846 fused_ordering(147) 00:10:22.846 fused_ordering(148) 00:10:22.846 fused_ordering(149) 00:10:22.846 fused_ordering(150) 00:10:22.846 fused_ordering(151) 00:10:22.846 fused_ordering(152) 00:10:22.846 
fused_ordering(153) 00:10:22.846 fused_ordering(154) 00:10:22.846 fused_ordering(155) 00:10:22.846 fused_ordering(156) 00:10:22.846 fused_ordering(157) 00:10:22.846 fused_ordering(158) 00:10:22.846 fused_ordering(159) 00:10:22.846 fused_ordering(160) 00:10:22.846 fused_ordering(161) 00:10:22.846 fused_ordering(162) 00:10:22.846 fused_ordering(163) 00:10:22.846 fused_ordering(164) 00:10:22.846 fused_ordering(165) 00:10:22.846 fused_ordering(166) 00:10:22.846 fused_ordering(167) 00:10:22.846 fused_ordering(168) 00:10:22.846 fused_ordering(169) 00:10:22.846 fused_ordering(170) 00:10:22.846 fused_ordering(171) 00:10:22.846 fused_ordering(172) 00:10:22.846 fused_ordering(173) 00:10:22.846 fused_ordering(174) 00:10:22.846 fused_ordering(175) 00:10:22.846 fused_ordering(176) 00:10:22.846 fused_ordering(177) 00:10:22.846 fused_ordering(178) 00:10:22.846 fused_ordering(179) 00:10:22.846 fused_ordering(180) 00:10:22.846 fused_ordering(181) 00:10:22.846 fused_ordering(182) 00:10:22.846 fused_ordering(183) 00:10:22.846 fused_ordering(184) 00:10:22.846 fused_ordering(185) 00:10:22.846 fused_ordering(186) 00:10:22.847 fused_ordering(187) 00:10:22.847 fused_ordering(188) 00:10:22.847 fused_ordering(189) 00:10:22.847 fused_ordering(190) 00:10:22.847 fused_ordering(191) 00:10:22.847 fused_ordering(192) 00:10:22.847 fused_ordering(193) 00:10:22.847 fused_ordering(194) 00:10:22.847 fused_ordering(195) 00:10:22.847 fused_ordering(196) 00:10:22.847 fused_ordering(197) 00:10:22.847 fused_ordering(198) 00:10:22.847 fused_ordering(199) 00:10:22.847 fused_ordering(200) 00:10:22.847 fused_ordering(201) 00:10:22.847 fused_ordering(202) 00:10:22.847 fused_ordering(203) 00:10:22.847 fused_ordering(204) 00:10:22.847 fused_ordering(205) 00:10:23.106 fused_ordering(206) 00:10:23.106 fused_ordering(207) 00:10:23.106 fused_ordering(208) 00:10:23.106 fused_ordering(209) 00:10:23.106 fused_ordering(210) 00:10:23.106 fused_ordering(211) 00:10:23.106 fused_ordering(212) 00:10:23.106 fused_ordering(213) 00:10:23.106 fused_ordering(214) 00:10:23.106 fused_ordering(215) 00:10:23.106 fused_ordering(216) 00:10:23.106 fused_ordering(217) 00:10:23.106 fused_ordering(218) 00:10:23.106 fused_ordering(219) 00:10:23.106 fused_ordering(220) 00:10:23.106 fused_ordering(221) 00:10:23.106 fused_ordering(222) 00:10:23.107 fused_ordering(223) 00:10:23.107 fused_ordering(224) 00:10:23.107 fused_ordering(225) 00:10:23.107 fused_ordering(226) 00:10:23.107 fused_ordering(227) 00:10:23.107 fused_ordering(228) 00:10:23.107 fused_ordering(229) 00:10:23.107 fused_ordering(230) 00:10:23.107 fused_ordering(231) 00:10:23.107 fused_ordering(232) 00:10:23.107 fused_ordering(233) 00:10:23.107 fused_ordering(234) 00:10:23.107 fused_ordering(235) 00:10:23.107 fused_ordering(236) 00:10:23.107 fused_ordering(237) 00:10:23.107 fused_ordering(238) 00:10:23.107 fused_ordering(239) 00:10:23.107 fused_ordering(240) 00:10:23.107 fused_ordering(241) 00:10:23.107 fused_ordering(242) 00:10:23.107 fused_ordering(243) 00:10:23.107 fused_ordering(244) 00:10:23.107 fused_ordering(245) 00:10:23.107 fused_ordering(246) 00:10:23.107 fused_ordering(247) 00:10:23.107 fused_ordering(248) 00:10:23.107 fused_ordering(249) 00:10:23.107 fused_ordering(250) 00:10:23.107 fused_ordering(251) 00:10:23.107 fused_ordering(252) 00:10:23.107 fused_ordering(253) 00:10:23.107 fused_ordering(254) 00:10:23.107 fused_ordering(255) 00:10:23.107 fused_ordering(256) 00:10:23.107 fused_ordering(257) 00:10:23.107 fused_ordering(258) 00:10:23.107 fused_ordering(259) 00:10:23.107 fused_ordering(260) 
00:10:23.107 fused_ordering(261) 00:10:23.107 fused_ordering(262) 00:10:23.107 fused_ordering(263) 00:10:23.107 fused_ordering(264) 00:10:23.107 fused_ordering(265) 00:10:23.107 fused_ordering(266) 00:10:23.107 fused_ordering(267) 00:10:23.107 fused_ordering(268) 00:10:23.107 fused_ordering(269) 00:10:23.107 fused_ordering(270) 00:10:23.107 fused_ordering(271) 00:10:23.107 fused_ordering(272) 00:10:23.107 fused_ordering(273) 00:10:23.107 fused_ordering(274) 00:10:23.107 fused_ordering(275) 00:10:23.107 fused_ordering(276) 00:10:23.107 fused_ordering(277) 00:10:23.107 fused_ordering(278) 00:10:23.107 fused_ordering(279) 00:10:23.107 fused_ordering(280) 00:10:23.107 fused_ordering(281) 00:10:23.107 fused_ordering(282) 00:10:23.107 fused_ordering(283) 00:10:23.107 fused_ordering(284) 00:10:23.107 fused_ordering(285) 00:10:23.107 fused_ordering(286) 00:10:23.107 fused_ordering(287) 00:10:23.107 fused_ordering(288) 00:10:23.107 fused_ordering(289) 00:10:23.107 fused_ordering(290) 00:10:23.107 fused_ordering(291) 00:10:23.107 fused_ordering(292) 00:10:23.107 fused_ordering(293) 00:10:23.107 fused_ordering(294) 00:10:23.107 fused_ordering(295) 00:10:23.107 fused_ordering(296) 00:10:23.107 fused_ordering(297) 00:10:23.107 fused_ordering(298) 00:10:23.107 fused_ordering(299) 00:10:23.107 fused_ordering(300) 00:10:23.107 fused_ordering(301) 00:10:23.107 fused_ordering(302) 00:10:23.107 fused_ordering(303) 00:10:23.107 fused_ordering(304) 00:10:23.107 fused_ordering(305) 00:10:23.107 fused_ordering(306) 00:10:23.107 fused_ordering(307) 00:10:23.107 fused_ordering(308) 00:10:23.107 fused_ordering(309) 00:10:23.107 fused_ordering(310) 00:10:23.107 fused_ordering(311) 00:10:23.107 fused_ordering(312) 00:10:23.107 fused_ordering(313) 00:10:23.107 fused_ordering(314) 00:10:23.107 fused_ordering(315) 00:10:23.107 fused_ordering(316) 00:10:23.107 fused_ordering(317) 00:10:23.107 fused_ordering(318) 00:10:23.107 fused_ordering(319) 00:10:23.107 fused_ordering(320) 00:10:23.107 fused_ordering(321) 00:10:23.107 fused_ordering(322) 00:10:23.107 fused_ordering(323) 00:10:23.107 fused_ordering(324) 00:10:23.107 fused_ordering(325) 00:10:23.107 fused_ordering(326) 00:10:23.107 fused_ordering(327) 00:10:23.107 fused_ordering(328) 00:10:23.107 fused_ordering(329) 00:10:23.107 fused_ordering(330) 00:10:23.107 fused_ordering(331) 00:10:23.107 fused_ordering(332) 00:10:23.107 fused_ordering(333) 00:10:23.107 fused_ordering(334) 00:10:23.107 fused_ordering(335) 00:10:23.107 fused_ordering(336) 00:10:23.107 fused_ordering(337) 00:10:23.107 fused_ordering(338) 00:10:23.107 fused_ordering(339) 00:10:23.107 fused_ordering(340) 00:10:23.107 fused_ordering(341) 00:10:23.107 fused_ordering(342) 00:10:23.107 fused_ordering(343) 00:10:23.107 fused_ordering(344) 00:10:23.107 fused_ordering(345) 00:10:23.107 fused_ordering(346) 00:10:23.107 fused_ordering(347) 00:10:23.107 fused_ordering(348) 00:10:23.107 fused_ordering(349) 00:10:23.107 fused_ordering(350) 00:10:23.107 fused_ordering(351) 00:10:23.107 fused_ordering(352) 00:10:23.107 fused_ordering(353) 00:10:23.107 fused_ordering(354) 00:10:23.107 fused_ordering(355) 00:10:23.107 fused_ordering(356) 00:10:23.107 fused_ordering(357) 00:10:23.107 fused_ordering(358) 00:10:23.107 fused_ordering(359) 00:10:23.107 fused_ordering(360) 00:10:23.107 fused_ordering(361) 00:10:23.107 fused_ordering(362) 00:10:23.107 fused_ordering(363) 00:10:23.107 fused_ordering(364) 00:10:23.107 fused_ordering(365) 00:10:23.107 fused_ordering(366) 00:10:23.107 fused_ordering(367) 00:10:23.107 
fused_ordering(368) 00:10:23.107 fused_ordering(369) 00:10:23.107 fused_ordering(370) [fused_ordering(371) through fused_ordering(1012) reported in unbroken sequence; timestamps advanced from 00:10:23.107 through 00:10:23.365, 00:10:23.931 and 00:10:24.867 to 00:10:24.868]
00:10:24.868 fused_ordering(1013) 00:10:24.868 fused_ordering(1014) 00:10:24.868 fused_ordering(1015) 00:10:24.868 fused_ordering(1016) 00:10:24.868 fused_ordering(1017) 00:10:24.868 fused_ordering(1018) 00:10:24.868 fused_ordering(1019) 00:10:24.868 fused_ordering(1020) 00:10:24.868 fused_ordering(1021) 00:10:24.868 fused_ordering(1022) 00:10:24.868 fused_ordering(1023) 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:24.868 rmmod nvme_tcp 00:10:24.868 rmmod nvme_fabrics 00:10:24.868 rmmod nvme_keyring 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 690895 ']' 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 690895 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 690895 ']' 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 690895 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 690895 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 690895' 00:10:24.868 killing process with pid 690895 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 690895 00:10:24.868 15:47:21 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 690895 00:10:24.868 15:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:24.868 15:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:24.868 15:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:24.868 15:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:24.868 15:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:24.868 15:47:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.868 15:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:10:24.868 15:47:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.405 15:47:24 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:27.405 00:10:27.405 real 0m7.642s 00:10:27.405 user 0m4.951s 00:10:27.405 sys 0m3.500s 00:10:27.405 15:47:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:27.405 15:47:24 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:10:27.405 ************************************ 00:10:27.405 END TEST nvmf_fused_ordering 00:10:27.405 ************************************ 00:10:27.405 15:47:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:27.405 15:47:24 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:27.405 15:47:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:27.405 15:47:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.405 15:47:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:27.405 ************************************ 00:10:27.405 START TEST nvmf_delete_subsystem 00:10:27.405 ************************************ 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:27.405 * Looking for test storage... 00:10:27.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:27.405 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # 
'[' 0 -eq 1 ']' 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:27.406 15:47:24 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:29.306 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:29.306 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
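The two "Found 0000:84:00.x" entries above come from matching PCI vendor/device IDs against the tables just populated. The same check can be sketched with a plain sysfs walk (illustration only; nvmf/common.sh keeps its own pci_bus_cache and matches several E810/X722/mlx IDs, not just the 0x8086:0x159b pair used here):
# Walk sysfs and report E810 (0x8086:0x159b) ports plus their kernel net devices.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(cat "$pci/vendor")    # e.g. 0x8086
    device=$(cat "$pci/device")    # e.g. 0x159b
    if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        echo "Found ${pci##*/} ($vendor - $device)"
        ls "$pci/net" 2>/dev/null  # e.g. cvl_0_0 / cvl_0_1 on this host
    fi
done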
00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:29.306 Found net devices under 0000:84:00.0: cvl_0_0 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:29.306 Found net devices under 0000:84:00.1: cvl_0_1 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.306 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:29.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:29.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:10:29.307 00:10:29.307 --- 10.0.0.2 ping statistics --- 00:10:29.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.307 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:29.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:29.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:10:29.307 00:10:29.307 --- 10.0.0.1 ping statistics --- 00:10:29.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:29.307 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:29.307 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.565 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=693259 00:10:29.565 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:29.565 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 693259 00:10:29.565 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 693259 ']' 00:10:29.565 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.565 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:29.565 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.565 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:29.565 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.565 [2024-07-12 15:47:26.649574] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:10:29.565 [2024-07-12 15:47:26.649656] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:29.565 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.565 [2024-07-12 15:47:26.711833] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:29.565 [2024-07-12 15:47:26.811545] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
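For reference, the namespace plumbing that nvmf_tcp_init traced above reduces to the commands below (a sketch without the harness's variable indirection and error handling): cvl_0_0 is moved into cvl_0_0_ns_spdk as the target port at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator-facing port
ping -c 1 10.0.0.2                                                  # root ns to target, matches the first ping above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns to initiator, matches the second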
00:10:29.565 [2024-07-12 15:47:26.811603] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.565 [2024-07-12 15:47:26.811630] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.565 [2024-07-12 15:47:26.811641] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.565 [2024-07-12 15:47:26.811650] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.565 [2024-07-12 15:47:26.811767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.565 [2024-07-12 15:47:26.811770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.823 [2024-07-12 15:47:26.955149] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.823 [2024-07-12 15:47:26.971346] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.823 NULL1 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.823 Delay0 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=693287 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:29.823 15:47:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:29.823 EAL: No free 2048 kB hugepages reported on node 1 00:10:29.823 [2024-07-12 15:47:27.046001] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
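Before the deletion that follows, the target-side configuration and the load generator boil down to the call sequence below. This is a condensed sketch only: the harness drives every step through its rpc_cmd/run_test wrappers, and the scripts/rpc.py path and default RPC socket are assumptions here rather than values taken from the log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$SPDK/scripts/rpc.py"    # nvmf_tgt was started inside cvl_0_0_ns_spdk with -m 0x3

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512        # 1000 MB null bdev, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # large injected latencies keep I/O queued
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Drive a 70/30 random read/write mix at queue depth 128 for 5 seconds, then delete the
# subsystem while that I/O is still outstanding; this is what produces the aborted
# completions logged next.
"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
wait $perf_pid || true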
00:10:31.717 15:47:29 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.717 15:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.717 15:47:29 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:31.975 Write completed with error (sct=0, sc=8) 00:10:31.975 Read completed with error (sct=0, sc=8) 00:10:31.975 Read completed with error (sct=0, sc=8) 00:10:31.975 starting I/O failed: -6 00:10:31.975 Write completed with error (sct=0, sc=8) 00:10:31.975 Read completed with error (sct=0, sc=8) 00:10:31.975 Read completed with error (sct=0, sc=8) 00:10:31.975 Read completed with error (sct=0, sc=8) 00:10:31.975 starting I/O failed: -6 00:10:31.975 Read completed with error (sct=0, sc=8) 00:10:31.975 Write completed with error (sct=0, sc=8) 00:10:31.975 Write completed with error (sct=0, sc=8) 00:10:31.975 Read completed with error (sct=0, sc=8) 00:10:31.975 starting I/O failed: -6 00:10:31.975 Write completed with error (sct=0, sc=8) 00:10:31.975 Write completed with error (sct=0, sc=8) 00:10:31.975 Read completed with error (sct=0, sc=8) 00:10:31.975 Read completed with error (sct=0, sc=8) 00:10:31.975 starting I/O failed: -6 00:10:31.975 Read completed with error (sct=0, sc=8) 00:10:31.975 Write completed with error (sct=0, sc=8) 00:10:31.975 Read completed with error (sct=0, sc=8) 00:10:31.975 Read completed with error (sct=0, sc=8) 00:10:31.975 starting I/O failed: -6 00:10:31.975 Write completed with error (sct=0, sc=8) 00:10:31.975 Read completed with error (sct=0, sc=8) 00:10:31.975 Read completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 [2024-07-12 15:47:29.155825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f226800d4d0 is same with the state(5) to be set 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Read 
completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6 00:10:31.976 Read completed with error (sct=0, sc=8) 00:10:31.976 Write completed with error (sct=0, sc=8) 00:10:31.976 Write 
completed with error (sct=0, sc=8) 00:10:31.976 starting I/O failed: -6
[repeated spdk_nvme_perf completion output elided between 00:10:31.976 and 00:10:32.908: long runs of "Read completed with error (sct=0, sc=8)" and "Write completed with error (sct=0, sc=8)" lines, many followed by "starting I/O failed: -6"]
00:10:32.908 [2024-07-12 15:47:30.060115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f6a70 is same with the state(5) to be set
00:10:32.908 [2024-07-12 15:47:30.160233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f226800d820 is same with the state(5) to be set
00:10:32.908 [2024-07-12 15:47:30.160629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2268000c00 is same with the state(5) to be set
00:10:32.908 [2024-07-12 15:47:30.160843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f226800d020 is same with the state(5) to be set
00:10:32.908 [2024-07-12 15:47:30.162477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f5af0 is same with the state(5) to be set
00:10:32.908 Initializing NVMe Controllers 00:10:32.908 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:32.908 Controller IO queue size 128, less than required. 00:10:32.908 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:32.908 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:32.908 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:32.908 Initialization complete. Launching workers. 00:10:32.908 ======================================================== 00:10:32.908 Latency(us) 00:10:32.908 Device Information : IOPS MiB/s Average min max 00:10:32.908 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.30 0.08 860100.95 433.49 1071681.45 00:10:32.908 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.87 0.08 987361.35 2373.51 1069287.92 00:10:32.908 ======================================================== 00:10:32.908 Total : 330.17 0.16 920948.49 433.49 1071681.45 00:10:32.908 00:10:32.908 [2024-07-12 15:47:30.162967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f6a70 (9): Bad file descriptor 00:10:32.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:32.908 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.908 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:32.908 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 693287 00:10:32.908 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:33.470 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:33.470 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 693287 00:10:33.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (693287) - No such process 00:10:33.470 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 693287 00:10:33.470 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:10:33.470 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 693287 00:10:33.470 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:10:33.470 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.470 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:10:33.470 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:33.470 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 693287 00:10:33.470 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:10:33.470 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.471 15:47:30 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.471 [2024-07-12 15:47:30.684950] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=693692 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 693692 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:33.471 15:47:30 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:33.471 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.471 [2024-07-12 15:47:30.748625] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
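The second half of the test repeats the run just traced: spdk_nvme_perf is launched in the background against the TCP listener, its PID is recorded (693692 here), and the script polls it with kill -0 and short sleeps until the workload exits. A minimal bash sketch of that launch-and-poll pattern, reusing the perf arguments shown in the trace (error handling and the surrounding test plumbing omitted):

  # Start a 3-second randrw perf run against the TCP target in the background.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # Poll until the perf process exits, giving up after roughly 10 seconds.
  delay=0
  while kill -0 "$perf_pid" 2> /dev/null; do
      (( delay++ > 20 )) && break
      sleep 0.5
  done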
00:10:34.035 15:47:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:34.035 15:47:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 693692 00:10:34.035 15:47:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:34.623 15:47:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:34.623 15:47:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 693692 00:10:34.623 15:47:31 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:35.188 15:47:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:35.188 15:47:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 693692 00:10:35.188 15:47:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:35.446 15:47:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:35.446 15:47:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 693692 00:10:35.446 15:47:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:36.009 15:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:36.009 15:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 693692 00:10:36.009 15:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:36.572 15:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:36.572 15:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 693692 00:10:36.572 15:47:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:36.830 Initializing NVMe Controllers 00:10:36.830 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:36.830 Controller IO queue size 128, less than required. 00:10:36.830 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:36.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:36.830 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:36.830 Initialization complete. Launching workers. 
00:10:36.830 ======================================================== 00:10:36.830 Latency(us) 00:10:36.830 Device Information : IOPS MiB/s Average min max 00:10:36.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1013847.54 1000204.82 1068600.38 00:10:36.830 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1014651.05 1000200.33 1070051.60 00:10:36.830 ======================================================== 00:10:36.830 Total : 256.00 0.12 1014249.29 1000200.33 1070051.60 00:10:36.830 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 693692 00:10:37.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (693692) - No such process 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 693692 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:37.087 rmmod nvme_tcp 00:10:37.087 rmmod nvme_fabrics 00:10:37.087 rmmod nvme_keyring 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 693259 ']' 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 693259 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 693259 ']' 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 693259 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 693259 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 693259' 00:10:37.087 killing process with pid 693259 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 693259 00:10:37.087 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 693259 
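At this point the script enters its teardown (nvmftestfini): the host-side NVMe/TCP modules are removed and the nvmf_tgt process started for the test is killed; the interface flush and network-namespace cleanup follow in the next lines of the trace. A condensed sketch of the host-side steps, with the PID taken from this run:

  # Unload the host-side NVMe fabrics modules used by the test.
  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Stop the nvmf_tgt started for this test (PID recorded at startup).
  nvmfpid=693259
  kill "$nvmfpid"
  wait "$nvmfpid" 2> /dev/null || true   # wait succeeds in the test because its shell started nvmf_tgt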
00:10:37.346 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:37.346 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:37.346 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:37.346 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:37.346 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:37.346 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.346 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:37.346 15:47:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.878 15:47:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:39.878 00:10:39.878 real 0m12.358s 00:10:39.878 user 0m27.631s 00:10:39.878 sys 0m3.143s 00:10:39.878 15:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.878 15:47:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:39.878 ************************************ 00:10:39.878 END TEST nvmf_delete_subsystem 00:10:39.878 ************************************ 00:10:39.878 15:47:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:39.878 15:47:36 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:10:39.878 15:47:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:39.878 15:47:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.878 15:47:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:39.878 ************************************ 00:10:39.878 START TEST nvmf_ns_masking 00:10:39.878 ************************************ 00:10:39.878 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:10:39.878 * Looking for test storage... 
00:10:39.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.878 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.878 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:10:39.878 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.878 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.878 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.878 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.878 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.878 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.878 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.878 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.878 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=bc2c25bf-bc10-44a3-af28-2468683059ef 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=725a0f7b-6df7-4cb1-a714-fc6bd96fd2c4 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=73e3d027-8f59-453d-a26c-ece8f19f66a4 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:10:39.879 15:47:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.777 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:41.778 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:41.778 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.778 
15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:41.778 Found net devices under 0000:84:00.0: cvl_0_0 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:41.778 Found net devices under 0000:84:00.1: cvl_0_1 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:41.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:10:41.778 00:10:41.778 --- 10.0.0.2 ping statistics --- 00:10:41.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.778 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:10:41.778 00:10:41.778 --- 10.0.0.1 ping statistics --- 00:10:41.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.778 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=696074 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 696074 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 696074 ']' 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:41.778 15:47:38 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:41.778 15:47:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:41.778 [2024-07-12 15:47:38.943351] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:10:41.778 [2024-07-12 15:47:38.943435] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.778 EAL: No free 2048 kB hugepages reported on node 1 00:10:41.778 [2024-07-12 15:47:39.015085] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.037 [2024-07-12 15:47:39.126217] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:42.037 [2024-07-12 15:47:39.126281] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:42.037 [2024-07-12 15:47:39.126310] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:42.037 [2024-07-12 15:47:39.126321] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:42.037 [2024-07-12 15:47:39.126331] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:42.037 [2024-07-12 15:47:39.126358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.970 15:47:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:42.970 15:47:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:42.970 15:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:42.970 15:47:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:42.970 15:47:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:42.970 15:47:39 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:42.970 15:47:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:42.970 [2024-07-12 15:47:40.234914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:42.970 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:10:42.970 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:10:42.970 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:43.228 Malloc1 00:10:43.485 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:43.743 Malloc2 00:10:43.743 15:47:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
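The lines above complete the target bring-up for the ns_masking test: a dedicated network namespace (cvl_0_0_ns_spdk) holds the target side on 10.0.0.2 while the initiator stays in the default namespace on 10.0.0.1, nvmf_tgt is started inside that namespace, and the RPCs then create a TCP transport, two 64 MiB malloc bdevs and the cnode1 subsystem. A condensed sketch of those steps using the paths and arguments shown in the trace (the waitforlisten step that blocks until the RPC socket is up is omitted):

  # Start the target inside the test namespace created earlier in the trace.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &

  # Configure it over the RPC socket: TCP transport, two malloc bdevs, one subsystem.
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc1
  $rpc_py bdev_malloc_create 64 512 -b Malloc2
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME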
00:10:44.001 15:47:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:10:44.260 15:47:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.518 [2024-07-12 15:47:41.640115] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.518 15:47:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:10:44.518 15:47:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 73e3d027-8f59-453d-a26c-ece8f19f66a4 -a 10.0.0.2 -s 4420 -i 4 00:10:44.518 15:47:41 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:10:44.518 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:44.518 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.519 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:44.519 15:47:41 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:47.047 [ 0]:0x1 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce0a108595dd4828b8059d69397ca9a1 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce0a108595dd4828b8059d69397ca9a1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:47.047 15:47:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
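The connect above uses the host NQN and host ID generated earlier, and every visibility check that follows goes through one small helper. A sketch reconstructed from the traced commands (the helper in ns_masking.sh may be worded slightly differently): a namespace counts as visible when nvme list-ns reports its NSID and nvme id-ns returns a non-zero NGUID for it.

  ns_is_visible() {
      # Print the NSID if the controller exposes it at all.
      nvme list-ns /dev/nvme0 | grep "$1"
      # A masked namespace identifies with an all-zero NGUID.
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]
  }

  ns_is_visible 0x1   # e.g. NSID 1, as checked repeatedly in the trace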
00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:47.047 [ 0]:0x1 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce0a108595dd4828b8059d69397ca9a1 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce0a108595dd4828b8059d69397ca9a1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:47.047 [ 1]:0x2 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5212248346ad4c249eb96fd3aa8783aa 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5212248346ad4c249eb96fd3aa8783aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:10:47.047 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.305 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.562 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:10:47.562 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:10:47.562 15:47:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 73e3d027-8f59-453d-a26c-ece8f19f66a4 -a 10.0.0.2 -s 4420 -i 4 00:10:47.820 15:47:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:10:47.820 15:47:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:47.820 15:47:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:47.820 15:47:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:10:47.820 15:47:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:10:47.820 15:47:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:50.369 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:50.369 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:50.369 15:47:47 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.369 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:50.369 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.369 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:10:50.369 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:50.369 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:50.369 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:50.369 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:50.370 [ 0]:0x2 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5212248346ad4c249eb96fd3aa8783aa 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
5212248346ad4c249eb96fd3aa8783aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:50.370 [ 0]:0x1 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce0a108595dd4828b8059d69397ca9a1 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce0a108595dd4828b8059d69397ca9a1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:50.370 [ 1]:0x2 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:50.370 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:50.628 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5212248346ad4c249eb96fd3aa8783aa 00:10:50.628 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5212248346ad4c249eb96fd3aa8783aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.628 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:50.886 [ 0]:0x2 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:50.886 15:47:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:50.886 15:47:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5212248346ad4c249eb96fd3aa8783aa 00:10:50.886 15:47:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5212248346ad4c249eb96fd3aa8783aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:50.886 15:47:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:10:50.886 15:47:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:50.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.886 15:47:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:51.144 15:47:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:10:51.144 15:47:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 73e3d027-8f59-453d-a26c-ece8f19f66a4 -a 10.0.0.2 -s 4420 -i 4 00:10:51.402 15:47:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:10:51.402 15:47:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:10:51.402 15:47:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.402 15:47:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:10:51.402 15:47:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:10:51.402 15:47:48 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:10:53.301 15:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:53.301 15:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:53.301 15:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:53.301 15:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:10:53.301 15:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.301 15:47:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
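The visibility probe that ns_masking.sh keeps driving above reduces to two nvme-cli calls; a minimal standalone sketch, assuming the controller enumerates as /dev/nvme0 and jq is available exactly as in this run (only the packaging into a function call below is new, every command appears in the trace):
ns_is_visible() {
    local nsid=$1
    # A masked namespace drops out of the active namespace list reported by list-ns...
    nvme list-ns /dev/nvme0 | grep "$nsid"
    # ...and Identify Namespace comes back with an all-zero NGUID for it, as seen above.
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}
ns_is_visible 0x1    # succeeds only while the connected host NQN is allowed to see NSID 1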
00:10:53.301 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:10:53.301 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:10:53.301 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:10:53.301 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:10:53.301 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:10:53.558 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:53.558 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:53.558 [ 0]:0x1 00:10:53.558 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:53.558 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:53.558 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ce0a108595dd4828b8059d69397ca9a1 00:10:53.559 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ce0a108595dd4828b8059d69397ca9a1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:53.559 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:10:53.559 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:53.559 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:53.559 [ 1]:0x2 00:10:53.559 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:53.559 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:53.559 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5212248346ad4c249eb96fd3aa8783aa 00:10:53.559 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5212248346ad4c249eb96fd3aa8783aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:53.559 15:47:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:53.817 [ 0]:0x2 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:53.817 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:54.075 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5212248346ad4c249eb96fd3aa8783aa 00:10:54.075 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5212248346ad4c249eb96fd3aa8783aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:54.075 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:54.075 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:54.075 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:54.075 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:54.075 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.075 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:54.075 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.075 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:54.075 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.075 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:54.075 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:54.075 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:10:54.332 [2024-07-12 15:47:51.396747] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:10:54.332 request: 00:10:54.332 { 00:10:54.332 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:54.332 "nsid": 2, 00:10:54.332 "host": "nqn.2016-06.io.spdk:host1", 00:10:54.332 "method": "nvmf_ns_remove_host", 00:10:54.332 "req_id": 1 00:10:54.332 } 00:10:54.332 Got JSON-RPC error response 00:10:54.332 response: 00:10:54.332 { 00:10:54.332 "code": -32602, 00:10:54.332 "message": "Invalid parameters" 00:10:54.332 } 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:54.332 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:10:54.333 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:54.333 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:10:54.333 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:54.333 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:54.333 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:54.333 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:10:54.333 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:10:54.333 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:10:54.333 [ 0]:0x2 00:10:54.333 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:10:54.333 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:10:54.333 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5212248346ad4c249eb96fd3aa8783aa 00:10:54.333 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
5212248346ad4c249eb96fd3aa8783aa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:10:54.333 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:10:54.333 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.591 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=697807 00:10:54.591 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:10:54.591 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.591 15:47:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 697807 /var/tmp/host.sock 00:10:54.591 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 697807 ']' 00:10:54.591 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:10:54.591 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.591 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:10:54.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:10:54.591 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.591 15:47:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:10:54.591 [2024-07-12 15:47:51.737116] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
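In outline, the masking RPCs exercised above are the following (rpc.py path, subsystem and host NQNs exactly as used in this job; the $rpc shorthand is only for readability, and this is a sketch of calls already shown, not a general recipe):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Attach the namespace with visibility masking enabled, so no host sees it by default.
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
# Grant, then revoke, visibility for one host NQN.
$rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
$rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# The same remove call against NSID 2 fails above with "Invalid parameters",
# apparently because that namespace was left auto-visible and has no host list to edit.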
00:10:54.591 [2024-07-12 15:47:51.737207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid697807 ] 00:10:54.591 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.591 [2024-07-12 15:47:51.803387] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.849 [2024-07-12 15:47:51.913251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.849 15:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:54.849 15:47:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:10:54.849 15:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.106 15:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:55.364 15:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid bc2c25bf-bc10-44a3-af28-2468683059ef 00:10:55.364 15:47:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:55.364 15:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g BC2C25BFBC1044A3AF282468683059EF -i 00:10:55.621 15:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 725a0f7b-6df7-4cb1-a714-fc6bd96fd2c4 00:10:55.621 15:47:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:10:55.621 15:47:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 725A0F7B6DF74CB1A714FC6BD96FD2C4 -i 00:10:55.898 15:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:10:56.155 15:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:10:56.413 15:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:56.413 15:47:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:10:56.978 nvme0n1 00:10:56.978 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:10:56.978 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:10:57.236 nvme1n2 00:10:57.236 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:10:57.236 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:10:57.236 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:10:57.236 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:10:57.236 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:10:57.493 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:10:57.493 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:10:57.493 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:10:57.493 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:10:57.750 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ bc2c25bf-bc10-44a3-af28-2468683059ef == \b\c\2\c\2\5\b\f\-\b\c\1\0\-\4\4\a\3\-\a\f\2\8\-\2\4\6\8\6\8\3\0\5\9\e\f ]] 00:10:57.750 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:10:57.750 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:10:57.750 15:47:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:10:58.008 15:47:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 725a0f7b-6df7-4cb1-a714-fc6bd96fd2c4 == \7\2\5\a\0\f\7\b\-\6\d\f\7\-\4\c\b\1\-\a\7\1\4\-\f\c\6\b\d\9\6\f\d\2\c\4 ]] 00:10:58.008 15:47:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 697807 00:10:58.008 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 697807 ']' 00:10:58.008 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 697807 00:10:58.008 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:58.008 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:58.008 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 697807 00:10:58.008 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:58.008 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:58.008 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 697807' 00:10:58.008 killing process with pid 697807 00:10:58.008 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 697807 00:10:58.008 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 697807 00:10:58.265 15:47:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:10:58.830 15:47:55 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:58.830 rmmod nvme_tcp 00:10:58.830 rmmod nvme_fabrics 00:10:58.830 rmmod nvme_keyring 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 696074 ']' 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 696074 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 696074 ']' 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 696074 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 696074 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 696074' 00:10:58.830 killing process with pid 696074 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 696074 00:10:58.830 15:47:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 696074 00:10:59.089 15:47:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:59.089 15:47:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:59.089 15:47:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:59.089 15:47:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:59.089 15:47:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:59.089 15:47:56 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.089 15:47:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:59.089 15:47:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.996 15:47:58 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:00.996 00:11:00.996 real 0m21.616s 00:11:00.996 user 0m27.947s 00:11:00.996 sys 0m4.092s 00:11:00.996 15:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.996 15:47:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:00.996 ************************************ 00:11:00.996 END TEST nvmf_ns_masking 00:11:00.996 ************************************ 00:11:00.996 15:47:58 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:11:00.996 15:47:58 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:00.996 15:47:58 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:00.996 15:47:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:00.996 15:47:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.996 15:47:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:01.254 ************************************ 00:11:01.254 START TEST nvmf_nvme_cli 00:11:01.254 ************************************ 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:01.254 * Looking for test storage... 00:11:01.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:01.254 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:01.255 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:01.255 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:01.255 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:01.255 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:01.255 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.255 15:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:01.255 15:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:01.255 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:01.255 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:01.255 15:47:58 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:01.255 15:47:58 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.157 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:03.158 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:03.158 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:03.158 Found net devices under 0000:84:00.0: cvl_0_0 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:03.158 Found net devices under 0000:84:00.1: cvl_0_1 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.158 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:03.416 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:03.416 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.416 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.416 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.416 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.416 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:03.416 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.416 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.417 15:48:00 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:03.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:11:03.417 00:11:03.417 --- 10.0.0.2 ping statistics --- 00:11:03.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.417 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:03.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:11:03.417 00:11:03.417 --- 10.0.0.1 ping statistics --- 00:11:03.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.417 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=700350 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 700350 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 700350 ']' 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:03.417 15:48:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.417 [2024-07-12 15:48:00.657979] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
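Condensed from the nvmf/common.sh trace above, the phy setup parks one of the two e810 ports in a private network namespace so the target (10.0.0.2) and initiator (10.0.0.1) sides can talk over real NICs on a single box; interface and namespace names as reported in this run, commands taken from the steps shown above:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port stays in the root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # the two pings above check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1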
00:11:03.417 [2024-07-12 15:48:00.658091] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.417 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.675 [2024-07-12 15:48:00.724819] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.675 [2024-07-12 15:48:00.838824] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.675 [2024-07-12 15:48:00.838877] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.675 [2024-07-12 15:48:00.838890] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.675 [2024-07-12 15:48:00.838902] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.675 [2024-07-12 15:48:00.838911] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.675 [2024-07-12 15:48:00.838967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.675 [2024-07-12 15:48:00.839026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.675 [2024-07-12 15:48:00.839093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.675 [2024-07-12 15:48:00.839096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.933 15:48:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:03.933 15:48:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:03.933 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:03.933 15:48:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:03.934 15:48:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.934 15:48:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.934 15:48:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.934 15:48:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.934 15:48:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.934 [2024-07-12 15:48:01.002409] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.934 Malloc0 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.934 Malloc1 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.934 15:48:01 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.934 [2024-07-12 15:48:01.083546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:11:03.934 00:11:03.934 Discovery Log Number of Records 2, Generation counter 2 00:11:03.934 =====Discovery Log Entry 0====== 00:11:03.934 trtype: tcp 00:11:03.934 adrfam: ipv4 00:11:03.934 subtype: current discovery subsystem 00:11:03.934 treq: not required 00:11:03.934 portid: 0 00:11:03.934 trsvcid: 4420 00:11:03.934 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:03.934 traddr: 10.0.0.2 00:11:03.934 eflags: explicit discovery connections, duplicate discovery information 00:11:03.934 sectype: none 00:11:03.934 =====Discovery Log Entry 1====== 00:11:03.934 trtype: tcp 00:11:03.934 adrfam: ipv4 00:11:03.934 subtype: nvme subsystem 00:11:03.934 treq: not required 00:11:03.934 portid: 0 00:11:03.934 trsvcid: 4420 00:11:03.934 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:03.934 traddr: 10.0.0.2 00:11:03.934 eflags: none 00:11:03.934 sectype: none 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:03.934 15:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:04.865 15:48:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:04.865 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:11:04.865 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.865 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:11:04.865 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:11:04.865 15:48:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:06.762 15:48:03 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:06.762 /dev/nvme0n1 ]] 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:06.762 rmmod nvme_tcp 00:11:06.762 rmmod nvme_fabrics 00:11:06.762 rmmod nvme_keyring 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 700350 ']' 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 700350 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 700350 ']' 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 700350 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:11:06.762 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:06.763 15:48:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 700350 00:11:06.763 15:48:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:06.763 15:48:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:06.763 15:48:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 700350' 00:11:06.763 killing process with pid 700350 00:11:06.763 15:48:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 700350 00:11:06.763 15:48:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 700350 00:11:07.329 15:48:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:07.329 15:48:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:07.329 15:48:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:07.329 15:48:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:07.329 15:48:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:07.329 15:48:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.329 15:48:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.329 15:48:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.232 15:48:06 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:09.232 00:11:09.232 real 0m8.089s 00:11:09.232 user 0m14.248s 00:11:09.232 sys 0m2.325s 00:11:09.232 15:48:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.232 15:48:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:09.232 ************************************ 00:11:09.232 END TEST nvmf_nvme_cli 00:11:09.232 ************************************ 00:11:09.233 15:48:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:09.233 15:48:06 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:11:09.233 15:48:06 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:11:09.233 15:48:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:09.233 15:48:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.233 15:48:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:09.233 ************************************ 00:11:09.233 START TEST nvmf_vfio_user 00:11:09.233 ************************************ 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:09.233 * Looking for test storage... 00:11:09.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:09.233 
15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=701248 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 701248' 00:11:09.233 Process pid: 701248 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 701248 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 701248 ']' 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:09.233 15:48:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:09.491 [2024-07-12 15:48:06.563281] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:11:09.491 [2024-07-12 15:48:06.563360] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.491 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.491 [2024-07-12 15:48:06.620934] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.491 [2024-07-12 15:48:06.727932] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.491 [2024-07-12 15:48:06.727973] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.491 [2024-07-12 15:48:06.727988] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.491 [2024-07-12 15:48:06.728001] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.491 [2024-07-12 15:48:06.728012] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
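The vfio-user bring-up traced here reduces to a short sequence: start nvmf_tgt, wait for its RPC socket, create a VFIOUSER transport, back each subsystem with a malloc bdev, and expose it as a listener under a /var/run/vfio-user socket directory. A minimal sketch of that flow, assuming the default /var/tmp/spdk.sock RPC socket and using an rpc_get_methods poll in place of the test's waitforlisten helper (paths, sizes and names mirror this run):

    # Sketch: bring up an SPDK vfio-user target and provision one subsystem.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    # Poll the RPC server until it answers (stand-in for the waitforlisten helper).
    until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 1; done
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The RPC calls below repeat this sequence twice, once per device (Malloc1/cnode1 under vfio-user1/1, then Malloc2/cnode2 under vfio-user2/2).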
00:11:09.491 [2024-07-12 15:48:06.728080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.491 [2024-07-12 15:48:06.728190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.491 [2024-07-12 15:48:06.728214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.491 [2024-07-12 15:48:06.728217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.747 15:48:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.747 15:48:06 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:09.747 15:48:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:10.675 15:48:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:10.932 15:48:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:10.932 15:48:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:10.932 15:48:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:10.932 15:48:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:10.932 15:48:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:11.188 Malloc1 00:11:11.188 15:48:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:11.444 15:48:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:11.700 15:48:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:11.956 15:48:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:11.956 15:48:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:11.956 15:48:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:12.214 Malloc2 00:11:12.214 15:48:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:12.470 15:48:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:12.727 15:48:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:12.983 15:48:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:12.983 15:48:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:12.983 15:48:10 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:12.983 15:48:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:12.983 15:48:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:12.983 15:48:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:12.983 [2024-07-12 15:48:10.224633] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:11:12.983 [2024-07-12 15:48:10.224678] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid702175 ] 00:11:12.983 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.983 [2024-07-12 15:48:10.259143] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:12.983 [2024-07-12 15:48:10.267230] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:12.983 [2024-07-12 15:48:10.267258] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1759157000 00:11:12.983 [2024-07-12 15:48:10.268221] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:12.983 [2024-07-12 15:48:10.269219] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:12.983 [2024-07-12 15:48:10.270225] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:12.983 [2024-07-12 15:48:10.271230] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:12.983 [2024-07-12 15:48:10.272240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:12.983 [2024-07-12 15:48:10.273243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:12.983 [2024-07-12 15:48:10.274267] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:12.983 [2024-07-12 15:48:10.275268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:12.983 [2024-07-12 15:48:10.276276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:12.983 [2024-07-12 15:48:10.276297] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f175914c000 00:11:13.242 [2024-07-12 15:48:10.277475] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:13.242 [2024-07-12 15:48:10.293423] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:13.242 [2024-07-12 15:48:10.293461] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:11:13.242 [2024-07-12 15:48:10.298413] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:13.242 [2024-07-12 15:48:10.298467] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:13.242 [2024-07-12 15:48:10.298564] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:11:13.242 [2024-07-12 15:48:10.298589] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:11:13.242 [2024-07-12 15:48:10.298600] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:11:13.242 [2024-07-12 15:48:10.299405] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:13.242 [2024-07-12 15:48:10.299424] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:11:13.242 [2024-07-12 15:48:10.299436] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:11:13.242 [2024-07-12 15:48:10.300406] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:13.242 [2024-07-12 15:48:10.300424] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:11:13.242 [2024-07-12 15:48:10.300436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:11:13.242 [2024-07-12 15:48:10.301414] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:13.242 [2024-07-12 15:48:10.301432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:13.242 [2024-07-12 15:48:10.302423] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:13.242 [2024-07-12 15:48:10.302442] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:11:13.242 [2024-07-12 15:48:10.302451] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:11:13.242 [2024-07-12 15:48:10.302462] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:13.242 [2024-07-12 15:48:10.302571] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:11:13.242 [2024-07-12 15:48:10.302583] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:13.242 [2024-07-12 15:48:10.302592] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:13.242 [2024-07-12 15:48:10.303428] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:13.242 [2024-07-12 15:48:10.304429] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:13.242 [2024-07-12 15:48:10.305435] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:13.242 [2024-07-12 15:48:10.306430] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:13.242 [2024-07-12 15:48:10.306529] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:13.242 [2024-07-12 15:48:10.307450] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:13.242 [2024-07-12 15:48:10.307467] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:13.242 [2024-07-12 15:48:10.307476] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.307499] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:11:13.243 [2024-07-12 15:48:10.307512] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.307536] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:13.243 [2024-07-12 15:48:10.307545] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:13.243 [2024-07-12 15:48:10.307563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:13.243 [2024-07-12 15:48:10.307615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:13.243 [2024-07-12 15:48:10.307629] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:11:13.243 [2024-07-12 15:48:10.307637] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:11:13.243 [2024-07-12 15:48:10.307644] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:11:13.243 [2024-07-12 15:48:10.307652] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:13.243 [2024-07-12 15:48:10.307659] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:11:13.243 [2024-07-12 15:48:10.307666] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:11:13.243 [2024-07-12 15:48:10.307674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.307685] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.307703] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:13.243 [2024-07-12 15:48:10.307735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:13.243 [2024-07-12 15:48:10.307767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.243 [2024-07-12 15:48:10.307781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.243 [2024-07-12 15:48:10.307793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.243 [2024-07-12 15:48:10.307805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:13.243 [2024-07-12 15:48:10.307814] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.307830] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.307844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:13.243 [2024-07-12 15:48:10.307857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:13.243 [2024-07-12 15:48:10.307867] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:11:13.243 [2024-07-12 15:48:10.307875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.307890] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.307900] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.307913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:13.243 [2024-07-12 15:48:10.307925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:13.243 [2024-07-12 15:48:10.307993] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.308009] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.308030] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:13.243 [2024-07-12 15:48:10.308054] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:13.243 [2024-07-12 15:48:10.308064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:13.243 [2024-07-12 15:48:10.308083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:13.243 [2024-07-12 15:48:10.308114] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:11:13.243 [2024-07-12 15:48:10.308136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.308150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.308162] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:13.243 [2024-07-12 15:48:10.308170] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:13.243 [2024-07-12 15:48:10.308182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:13.243 [2024-07-12 15:48:10.308205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:13.243 [2024-07-12 15:48:10.308226] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.308240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.308251] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:13.243 [2024-07-12 15:48:10.308259] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:13.243 [2024-07-12 15:48:10.308268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:13.243 [2024-07-12 15:48:10.308279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:13.243 [2024-07-12 15:48:10.308292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.308303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
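The debug trace above is spdk_nvme_identify attaching to the vfio-user endpoint: it maps the emulated BAR regions, then walks the standard NVMe controller initialization sequence (wait for CC.EN=0 and CSTS.RDY=0, program the admin queue registers, set CC.EN=1, wait for CSTS.RDY=1, then Identify Controller, Set Features Number of Queues, Identify Active NS and the per-namespace Identify commands). From the caller's side the only transport-specific piece is the transport ID string; a sketch of the invocation used in this run (the -L options merely enable the nvme, nvme_vfio and vfio_pci debug log groups that produce this output):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/spdk_nvme_identify \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
        -g -L nvme -L nvme_vfio -L vfio_pci

The same -r string is what the spdk_nvme_perf, reconnect and arbitration runs further down pass to reach this controller.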
00:11:13.243 [2024-07-12 15:48:10.308316] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.308325] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.308333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.308341] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.308348] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:11:13.243 [2024-07-12 15:48:10.308355] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:11:13.243 [2024-07-12 15:48:10.308363] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:11:13.243 [2024-07-12 15:48:10.308387] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:13.243 [2024-07-12 15:48:10.308404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:13.243 [2024-07-12 15:48:10.308422] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:13.243 [2024-07-12 15:48:10.308434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:13.243 [2024-07-12 15:48:10.308449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:13.243 [2024-07-12 15:48:10.308461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:13.243 [2024-07-12 15:48:10.308476] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:13.243 [2024-07-12 15:48:10.308487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:13.243 [2024-07-12 15:48:10.308512] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:13.243 [2024-07-12 15:48:10.308522] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:13.243 [2024-07-12 15:48:10.308528] nvme_pcie_common.c:1240:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:13.243 [2024-07-12 15:48:10.308534] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:13.243 [2024-07-12 15:48:10.308543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:13.243 [2024-07-12 15:48:10.308554] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:13.243 
[2024-07-12 15:48:10.308562] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:13.243 [2024-07-12 15:48:10.308570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:13.243 [2024-07-12 15:48:10.308581] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:13.243 [2024-07-12 15:48:10.308589] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:13.243 [2024-07-12 15:48:10.308597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:13.243 [2024-07-12 15:48:10.308609] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:13.243 [2024-07-12 15:48:10.308617] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:13.243 [2024-07-12 15:48:10.308625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:13.243 [2024-07-12 15:48:10.308636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:13.243 [2024-07-12 15:48:10.308655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:13.243 [2024-07-12 15:48:10.308674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:13.243 [2024-07-12 15:48:10.308686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:13.243 ===================================================== 00:11:13.243 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:13.243 ===================================================== 00:11:13.243 Controller Capabilities/Features 00:11:13.243 ================================ 00:11:13.243 Vendor ID: 4e58 00:11:13.244 Subsystem Vendor ID: 4e58 00:11:13.244 Serial Number: SPDK1 00:11:13.244 Model Number: SPDK bdev Controller 00:11:13.244 Firmware Version: 24.09 00:11:13.244 Recommended Arb Burst: 6 00:11:13.244 IEEE OUI Identifier: 8d 6b 50 00:11:13.244 Multi-path I/O 00:11:13.244 May have multiple subsystem ports: Yes 00:11:13.244 May have multiple controllers: Yes 00:11:13.244 Associated with SR-IOV VF: No 00:11:13.244 Max Data Transfer Size: 131072 00:11:13.244 Max Number of Namespaces: 32 00:11:13.244 Max Number of I/O Queues: 127 00:11:13.244 NVMe Specification Version (VS): 1.3 00:11:13.244 NVMe Specification Version (Identify): 1.3 00:11:13.244 Maximum Queue Entries: 256 00:11:13.244 Contiguous Queues Required: Yes 00:11:13.244 Arbitration Mechanisms Supported 00:11:13.244 Weighted Round Robin: Not Supported 00:11:13.244 Vendor Specific: Not Supported 00:11:13.244 Reset Timeout: 15000 ms 00:11:13.244 Doorbell Stride: 4 bytes 00:11:13.244 NVM Subsystem Reset: Not Supported 00:11:13.244 Command Sets Supported 00:11:13.244 NVM Command Set: Supported 00:11:13.244 Boot Partition: Not Supported 00:11:13.244 Memory Page Size Minimum: 4096 bytes 00:11:13.244 Memory Page Size Maximum: 4096 bytes 00:11:13.244 Persistent Memory Region: Not Supported 
00:11:13.244 Optional Asynchronous Events Supported 00:11:13.244 Namespace Attribute Notices: Supported 00:11:13.244 Firmware Activation Notices: Not Supported 00:11:13.244 ANA Change Notices: Not Supported 00:11:13.244 PLE Aggregate Log Change Notices: Not Supported 00:11:13.244 LBA Status Info Alert Notices: Not Supported 00:11:13.244 EGE Aggregate Log Change Notices: Not Supported 00:11:13.244 Normal NVM Subsystem Shutdown event: Not Supported 00:11:13.244 Zone Descriptor Change Notices: Not Supported 00:11:13.244 Discovery Log Change Notices: Not Supported 00:11:13.244 Controller Attributes 00:11:13.244 128-bit Host Identifier: Supported 00:11:13.244 Non-Operational Permissive Mode: Not Supported 00:11:13.244 NVM Sets: Not Supported 00:11:13.244 Read Recovery Levels: Not Supported 00:11:13.244 Endurance Groups: Not Supported 00:11:13.244 Predictable Latency Mode: Not Supported 00:11:13.244 Traffic Based Keep ALive: Not Supported 00:11:13.244 Namespace Granularity: Not Supported 00:11:13.244 SQ Associations: Not Supported 00:11:13.244 UUID List: Not Supported 00:11:13.244 Multi-Domain Subsystem: Not Supported 00:11:13.244 Fixed Capacity Management: Not Supported 00:11:13.244 Variable Capacity Management: Not Supported 00:11:13.244 Delete Endurance Group: Not Supported 00:11:13.244 Delete NVM Set: Not Supported 00:11:13.244 Extended LBA Formats Supported: Not Supported 00:11:13.244 Flexible Data Placement Supported: Not Supported 00:11:13.244 00:11:13.244 Controller Memory Buffer Support 00:11:13.244 ================================ 00:11:13.244 Supported: No 00:11:13.244 00:11:13.244 Persistent Memory Region Support 00:11:13.244 ================================ 00:11:13.244 Supported: No 00:11:13.244 00:11:13.244 Admin Command Set Attributes 00:11:13.244 ============================ 00:11:13.244 Security Send/Receive: Not Supported 00:11:13.244 Format NVM: Not Supported 00:11:13.244 Firmware Activate/Download: Not Supported 00:11:13.244 Namespace Management: Not Supported 00:11:13.244 Device Self-Test: Not Supported 00:11:13.244 Directives: Not Supported 00:11:13.244 NVMe-MI: Not Supported 00:11:13.244 Virtualization Management: Not Supported 00:11:13.244 Doorbell Buffer Config: Not Supported 00:11:13.244 Get LBA Status Capability: Not Supported 00:11:13.244 Command & Feature Lockdown Capability: Not Supported 00:11:13.244 Abort Command Limit: 4 00:11:13.244 Async Event Request Limit: 4 00:11:13.244 Number of Firmware Slots: N/A 00:11:13.244 Firmware Slot 1 Read-Only: N/A 00:11:13.244 Firmware Activation Without Reset: N/A 00:11:13.244 Multiple Update Detection Support: N/A 00:11:13.244 Firmware Update Granularity: No Information Provided 00:11:13.244 Per-Namespace SMART Log: No 00:11:13.244 Asymmetric Namespace Access Log Page: Not Supported 00:11:13.244 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:13.244 Command Effects Log Page: Supported 00:11:13.244 Get Log Page Extended Data: Supported 00:11:13.244 Telemetry Log Pages: Not Supported 00:11:13.244 Persistent Event Log Pages: Not Supported 00:11:13.244 Supported Log Pages Log Page: May Support 00:11:13.244 Commands Supported & Effects Log Page: Not Supported 00:11:13.244 Feature Identifiers & Effects Log Page:May Support 00:11:13.244 NVMe-MI Commands & Effects Log Page: May Support 00:11:13.244 Data Area 4 for Telemetry Log: Not Supported 00:11:13.244 Error Log Page Entries Supported: 128 00:11:13.244 Keep Alive: Supported 00:11:13.244 Keep Alive Granularity: 10000 ms 00:11:13.244 00:11:13.244 NVM Command Set Attributes 
00:11:13.244 ========================== 00:11:13.244 Submission Queue Entry Size 00:11:13.244 Max: 64 00:11:13.244 Min: 64 00:11:13.244 Completion Queue Entry Size 00:11:13.244 Max: 16 00:11:13.244 Min: 16 00:11:13.244 Number of Namespaces: 32 00:11:13.244 Compare Command: Supported 00:11:13.244 Write Uncorrectable Command: Not Supported 00:11:13.244 Dataset Management Command: Supported 00:11:13.244 Write Zeroes Command: Supported 00:11:13.244 Set Features Save Field: Not Supported 00:11:13.244 Reservations: Not Supported 00:11:13.244 Timestamp: Not Supported 00:11:13.244 Copy: Supported 00:11:13.244 Volatile Write Cache: Present 00:11:13.244 Atomic Write Unit (Normal): 1 00:11:13.244 Atomic Write Unit (PFail): 1 00:11:13.244 Atomic Compare & Write Unit: 1 00:11:13.244 Fused Compare & Write: Supported 00:11:13.244 Scatter-Gather List 00:11:13.244 SGL Command Set: Supported (Dword aligned) 00:11:13.244 SGL Keyed: Not Supported 00:11:13.244 SGL Bit Bucket Descriptor: Not Supported 00:11:13.244 SGL Metadata Pointer: Not Supported 00:11:13.244 Oversized SGL: Not Supported 00:11:13.244 SGL Metadata Address: Not Supported 00:11:13.244 SGL Offset: Not Supported 00:11:13.244 Transport SGL Data Block: Not Supported 00:11:13.244 Replay Protected Memory Block: Not Supported 00:11:13.244 00:11:13.244 Firmware Slot Information 00:11:13.244 ========================= 00:11:13.244 Active slot: 1 00:11:13.244 Slot 1 Firmware Revision: 24.09 00:11:13.244 00:11:13.244 00:11:13.244 Commands Supported and Effects 00:11:13.244 ============================== 00:11:13.244 Admin Commands 00:11:13.244 -------------- 00:11:13.244 Get Log Page (02h): Supported 00:11:13.244 Identify (06h): Supported 00:11:13.244 Abort (08h): Supported 00:11:13.244 Set Features (09h): Supported 00:11:13.244 Get Features (0Ah): Supported 00:11:13.244 Asynchronous Event Request (0Ch): Supported 00:11:13.244 Keep Alive (18h): Supported 00:11:13.244 I/O Commands 00:11:13.244 ------------ 00:11:13.244 Flush (00h): Supported LBA-Change 00:11:13.244 Write (01h): Supported LBA-Change 00:11:13.244 Read (02h): Supported 00:11:13.244 Compare (05h): Supported 00:11:13.244 Write Zeroes (08h): Supported LBA-Change 00:11:13.244 Dataset Management (09h): Supported LBA-Change 00:11:13.244 Copy (19h): Supported LBA-Change 00:11:13.244 00:11:13.244 Error Log 00:11:13.244 ========= 00:11:13.244 00:11:13.244 Arbitration 00:11:13.244 =========== 00:11:13.244 Arbitration Burst: 1 00:11:13.244 00:11:13.244 Power Management 00:11:13.244 ================ 00:11:13.244 Number of Power States: 1 00:11:13.244 Current Power State: Power State #0 00:11:13.244 Power State #0: 00:11:13.244 Max Power: 0.00 W 00:11:13.244 Non-Operational State: Operational 00:11:13.244 Entry Latency: Not Reported 00:11:13.244 Exit Latency: Not Reported 00:11:13.244 Relative Read Throughput: 0 00:11:13.244 Relative Read Latency: 0 00:11:13.244 Relative Write Throughput: 0 00:11:13.244 Relative Write Latency: 0 00:11:13.244 Idle Power: Not Reported 00:11:13.244 Active Power: Not Reported 00:11:13.244 Non-Operational Permissive Mode: Not Supported 00:11:13.244 00:11:13.244 Health Information 00:11:13.244 ================== 00:11:13.244 Critical Warnings: 00:11:13.245 Available Spare Space: OK 00:11:13.245 Temperature: OK 00:11:13.245 Device Reliability: OK 00:11:13.245 Read Only: No 00:11:13.245 Volatile Memory Backup: OK 00:11:13.245 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:13.245 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:13.245 Available Spare: 0% 00:11:13.245 
Available Sp[2024-07-12 15:48:10.308827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:13.245 [2024-07-12 15:48:10.308845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:13.245 [2024-07-12 15:48:10.308884] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:11:13.245 [2024-07-12 15:48:10.308901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.245 [2024-07-12 15:48:10.308912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.245 [2024-07-12 15:48:10.308922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.245 [2024-07-12 15:48:10.308931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:13.245 [2024-07-12 15:48:10.312750] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:13.245 [2024-07-12 15:48:10.312788] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:13.245 [2024-07-12 15:48:10.313476] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:13.245 [2024-07-12 15:48:10.313555] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:11:13.245 [2024-07-12 15:48:10.313568] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:11:13.245 [2024-07-12 15:48:10.314495] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:13.245 [2024-07-12 15:48:10.314518] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:11:13.245 [2024-07-12 15:48:10.314571] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:13.245 [2024-07-12 15:48:10.316528] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:13.245 are Threshold: 0% 00:11:13.245 Life Percentage Used: 0% 00:11:13.245 Data Units Read: 0 00:11:13.245 Data Units Written: 0 00:11:13.245 Host Read Commands: 0 00:11:13.245 Host Write Commands: 0 00:11:13.245 Controller Busy Time: 0 minutes 00:11:13.245 Power Cycles: 0 00:11:13.245 Power On Hours: 0 hours 00:11:13.245 Unsafe Shutdowns: 0 00:11:13.245 Unrecoverable Media Errors: 0 00:11:13.245 Lifetime Error Log Entries: 0 00:11:13.245 Warning Temperature Time: 0 minutes 00:11:13.245 Critical Temperature Time: 0 minutes 00:11:13.245 00:11:13.245 Number of Queues 00:11:13.245 ================ 00:11:13.245 Number of I/O Submission Queues: 127 00:11:13.245 Number of I/O Completion Queues: 127 00:11:13.245 00:11:13.245 Active Namespaces 00:11:13.245 ================= 00:11:13.245 Namespace ID:1 00:11:13.245 Error Recovery Timeout: Unlimited 00:11:13.245 Command 
Set Identifier: NVM (00h) 00:11:13.245 Deallocate: Supported 00:11:13.245 Deallocated/Unwritten Error: Not Supported 00:11:13.245 Deallocated Read Value: Unknown 00:11:13.245 Deallocate in Write Zeroes: Not Supported 00:11:13.245 Deallocated Guard Field: 0xFFFF 00:11:13.245 Flush: Supported 00:11:13.245 Reservation: Supported 00:11:13.245 Namespace Sharing Capabilities: Multiple Controllers 00:11:13.245 Size (in LBAs): 131072 (0GiB) 00:11:13.245 Capacity (in LBAs): 131072 (0GiB) 00:11:13.245 Utilization (in LBAs): 131072 (0GiB) 00:11:13.245 NGUID: A6318B975A2D4ED487BAE0191DA7ACF5 00:11:13.245 UUID: a6318b97-5a2d-4ed4-87ba-e0191da7acf5 00:11:13.245 Thin Provisioning: Not Supported 00:11:13.245 Per-NS Atomic Units: Yes 00:11:13.245 Atomic Boundary Size (Normal): 0 00:11:13.245 Atomic Boundary Size (PFail): 0 00:11:13.245 Atomic Boundary Offset: 0 00:11:13.245 Maximum Single Source Range Length: 65535 00:11:13.245 Maximum Copy Length: 65535 00:11:13.245 Maximum Source Range Count: 1 00:11:13.245 NGUID/EUI64 Never Reused: No 00:11:13.245 Namespace Write Protected: No 00:11:13.245 Number of LBA Formats: 1 00:11:13.245 Current LBA Format: LBA Format #00 00:11:13.245 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:13.245 00:11:13.245 15:48:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:13.245 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.553 [2024-07-12 15:48:10.547566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:18.836 Initializing NVMe Controllers 00:11:18.836 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:18.836 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:18.836 Initialization complete. Launching workers. 00:11:18.836 ======================================================== 00:11:18.836 Latency(us) 00:11:18.836 Device Information : IOPS MiB/s Average min max 00:11:18.836 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34821.07 136.02 3675.98 1165.02 9817.20 00:11:18.836 ======================================================== 00:11:18.836 Total : 34821.07 136.02 3675.98 1165.02 9817.20 00:11:18.836 00:11:18.836 [2024-07-12 15:48:15.570150] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:18.836 15:48:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:18.836 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.836 [2024-07-12 15:48:15.801274] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:24.094 Initializing NVMe Controllers 00:11:24.094 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:24.094 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:11:24.094 Initialization complete. Launching workers. 
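The 4 KiB read run above and the 4 KiB write run whose latency table follows below both target the first vfio-user controller. As a minimal sketch only (assuming the same workspace path and the vfio-user1 endpoint created earlier in this job; the PERF and TR variables are shorthand introduced here), the two spdk_nvme_perf invocations logged in this block could be repeated by hand like this:

# Sketch only: flags and target string copied from the commands logged above.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
# 4 KiB reads for 5 seconds at queue depth 128 on core mask 0x2 (-s 256 -g memory options as in the logged command)
$PERF -r "$TR" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# same parameters in write mode; its latency table is the one that follows below
$PERF -r "$TR" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2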
00:11:24.094 ======================================================== 00:11:24.094 Latency(us) 00:11:24.094 Device Information : IOPS MiB/s Average min max 00:11:24.094 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16000.00 62.50 8009.83 6971.89 15969.30 00:11:24.094 ======================================================== 00:11:24.094 Total : 16000.00 62.50 8009.83 6971.89 15969.30 00:11:24.094 00:11:24.094 [2024-07-12 15:48:20.837817] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:24.094 15:48:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:24.094 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.094 [2024-07-12 15:48:21.048874] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:29.352 [2024-07-12 15:48:26.127120] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:29.352 Initializing NVMe Controllers 00:11:29.352 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:29.352 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:29.352 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:11:29.352 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:11:29.352 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:11:29.352 Initialization complete. Launching workers. 00:11:29.352 Starting thread on core 2 00:11:29.352 Starting thread on core 3 00:11:29.352 Starting thread on core 1 00:11:29.352 15:48:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:11:29.352 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.352 [2024-07-12 15:48:26.446228] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:32.634 [2024-07-12 15:48:29.508316] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:32.634 Initializing NVMe Controllers 00:11:32.634 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:32.634 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:32.634 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:11:32.634 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:11:32.634 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:11:32.634 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:11:32.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:32.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:32.634 Initialization complete. Launching workers. 
00:11:32.634 Starting thread on core 1 with urgent priority queue 00:11:32.634 Starting thread on core 2 with urgent priority queue 00:11:32.635 Starting thread on core 3 with urgent priority queue 00:11:32.635 Starting thread on core 0 with urgent priority queue 00:11:32.635 SPDK bdev Controller (SPDK1 ) core 0: 4496.67 IO/s 22.24 secs/100000 ios 00:11:32.635 SPDK bdev Controller (SPDK1 ) core 1: 5114.67 IO/s 19.55 secs/100000 ios 00:11:32.635 SPDK bdev Controller (SPDK1 ) core 2: 5277.33 IO/s 18.95 secs/100000 ios 00:11:32.635 SPDK bdev Controller (SPDK1 ) core 3: 5387.33 IO/s 18.56 secs/100000 ios 00:11:32.635 ======================================================== 00:11:32.635 00:11:32.635 15:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:32.635 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.635 [2024-07-12 15:48:29.813203] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:32.635 Initializing NVMe Controllers 00:11:32.635 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:32.635 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:32.635 Namespace ID: 1 size: 0GB 00:11:32.635 Initialization complete. 00:11:32.635 INFO: using host memory buffer for IO 00:11:32.635 Hello world! 00:11:32.635 [2024-07-12 15:48:29.849787] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:32.635 15:48:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:11:32.892 EAL: No free 2048 kB hugepages reported on node 1 00:11:32.892 [2024-07-12 15:48:30.149680] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:34.265 Initializing NVMe Controllers 00:11:34.265 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:34.265 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:34.265 Initialization complete. Launching workers. 
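Before the per-IO overhead histograms below, note that the three standalone tools exercised in this block (arbitration, hello_world, and the overhead measurer) can also be run by hand against the same endpoint. The following is a sketch only, with paths and flags copied from the invocations logged above (SPDK and TR are shorthand for this sketch):

# Sketch only: re-running the example apps from this block against vfio-user1.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
# arbitration demo across four cores; prints the per-core IO/s figures seen above
$SPDK/build/examples/arbitration -t 3 -r "$TR" -d 256 -g
# single-IO hello_world sanity check ("Hello world!" above)
$SPDK/build/examples/hello_world -d 256 -g -r "$TR"
# per-IO software overhead measurement; its submit/complete histograms follow below
$SPDK/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r "$TR"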
00:11:34.265 submit (in ns) avg, min, max = 8272.0, 3544.4, 4014576.7 00:11:34.265 complete (in ns) avg, min, max = 25541.2, 2071.1, 4017132.2 00:11:34.265 00:11:34.265 Submit histogram 00:11:34.265 ================ 00:11:34.265 Range in us Cumulative Count 00:11:34.265 3.532 - 3.556: 0.0986% ( 13) 00:11:34.265 3.556 - 3.579: 0.6521% ( 73) 00:11:34.265 3.579 - 3.603: 1.9336% ( 169) 00:11:34.265 3.603 - 3.627: 5.9221% ( 526) 00:11:34.265 3.627 - 3.650: 12.8147% ( 909) 00:11:34.265 3.650 - 3.674: 21.8305% ( 1189) 00:11:34.265 3.674 - 3.698: 30.8993% ( 1196) 00:11:34.265 3.698 - 3.721: 40.4610% ( 1261) 00:11:34.265 3.721 - 3.745: 47.4446% ( 921) 00:11:34.265 3.745 - 3.769: 53.5866% ( 810) 00:11:34.265 3.769 - 3.793: 57.7419% ( 548) 00:11:34.265 3.793 - 3.816: 61.2223% ( 459) 00:11:34.265 3.816 - 3.840: 64.2326% ( 397) 00:11:34.265 3.840 - 3.864: 67.4401% ( 423) 00:11:34.265 3.864 - 3.887: 70.9660% ( 465) 00:11:34.265 3.887 - 3.911: 75.1668% ( 554) 00:11:34.265 3.911 - 3.935: 79.5193% ( 574) 00:11:34.265 3.935 - 3.959: 82.9087% ( 447) 00:11:34.265 3.959 - 3.982: 85.5247% ( 345) 00:11:34.265 3.982 - 4.006: 87.3976% ( 247) 00:11:34.265 4.006 - 4.030: 88.8004% ( 185) 00:11:34.265 4.030 - 4.053: 90.2942% ( 197) 00:11:34.265 4.053 - 4.077: 91.3482% ( 139) 00:11:34.265 4.077 - 4.101: 92.2733% ( 122) 00:11:34.265 4.101 - 4.124: 93.0695% ( 105) 00:11:34.265 4.124 - 4.148: 94.0249% ( 126) 00:11:34.265 4.148 - 4.172: 94.6391% ( 81) 00:11:34.265 4.172 - 4.196: 95.1395% ( 66) 00:11:34.265 4.196 - 4.219: 95.5641% ( 56) 00:11:34.265 4.219 - 4.243: 95.8447% ( 37) 00:11:34.265 4.243 - 4.267: 96.0646% ( 29) 00:11:34.265 4.267 - 4.290: 96.2238% ( 21) 00:11:34.265 4.290 - 4.314: 96.3603% ( 18) 00:11:34.265 4.314 - 4.338: 96.5196% ( 21) 00:11:34.265 4.338 - 4.361: 96.5954% ( 10) 00:11:34.265 4.361 - 4.385: 96.7015% ( 14) 00:11:34.265 4.385 - 4.409: 96.7850% ( 11) 00:11:34.265 4.409 - 4.433: 96.8684% ( 11) 00:11:34.265 4.433 - 4.456: 96.9214% ( 7) 00:11:34.265 4.456 - 4.480: 97.0124% ( 12) 00:11:34.265 4.480 - 4.504: 97.0352% ( 3) 00:11:34.265 4.504 - 4.527: 97.0579% ( 3) 00:11:34.265 4.527 - 4.551: 97.0731% ( 2) 00:11:34.265 4.551 - 4.575: 97.0958% ( 3) 00:11:34.265 4.575 - 4.599: 97.1034% ( 1) 00:11:34.265 4.622 - 4.646: 97.1262% ( 3) 00:11:34.265 4.646 - 4.670: 97.1489% ( 3) 00:11:34.265 4.670 - 4.693: 97.1717% ( 3) 00:11:34.265 4.693 - 4.717: 97.1868% ( 2) 00:11:34.265 4.717 - 4.741: 97.2247% ( 5) 00:11:34.265 4.741 - 4.764: 97.2551% ( 4) 00:11:34.265 4.764 - 4.788: 97.3157% ( 8) 00:11:34.265 4.788 - 4.812: 97.3461% ( 4) 00:11:34.265 4.812 - 4.836: 97.3688% ( 3) 00:11:34.265 4.836 - 4.859: 97.4143% ( 6) 00:11:34.265 4.859 - 4.883: 97.4371% ( 3) 00:11:34.265 4.883 - 4.907: 97.4522% ( 2) 00:11:34.265 4.907 - 4.930: 97.4901% ( 5) 00:11:34.265 4.930 - 4.954: 97.5508% ( 8) 00:11:34.265 4.954 - 4.978: 97.5660% ( 2) 00:11:34.265 4.978 - 5.001: 97.5887% ( 3) 00:11:34.265 5.001 - 5.025: 97.6418% ( 7) 00:11:34.265 5.025 - 5.049: 97.6645% ( 3) 00:11:34.265 5.049 - 5.073: 97.6949% ( 4) 00:11:34.265 5.073 - 5.096: 97.7025% ( 1) 00:11:34.265 5.120 - 5.144: 97.7100% ( 1) 00:11:34.265 5.144 - 5.167: 97.7176% ( 1) 00:11:34.265 5.167 - 5.191: 97.7328% ( 2) 00:11:34.265 5.191 - 5.215: 97.7555% ( 3) 00:11:34.265 5.215 - 5.239: 97.7707% ( 2) 00:11:34.265 5.239 - 5.262: 97.7934% ( 3) 00:11:34.265 5.333 - 5.357: 97.8010% ( 1) 00:11:34.265 5.404 - 5.428: 97.8162% ( 2) 00:11:34.265 5.499 - 5.523: 97.8238% ( 1) 00:11:34.265 5.689 - 5.713: 97.8389% ( 2) 00:11:34.265 5.760 - 5.784: 97.8541% ( 2) 00:11:34.265 5.784 - 5.807: 97.8617% ( 1) 
00:11:34.265 5.879 - 5.902: 97.8693% ( 1) 00:11:34.265 5.973 - 5.997: 97.8769% ( 1) 00:11:34.265 6.044 - 6.068: 97.8844% ( 1) 00:11:34.265 6.305 - 6.353: 97.8920% ( 1) 00:11:34.265 6.495 - 6.542: 97.8996% ( 1) 00:11:34.265 6.590 - 6.637: 97.9148% ( 2) 00:11:34.265 6.684 - 6.732: 97.9224% ( 1) 00:11:34.265 6.874 - 6.921: 97.9299% ( 1) 00:11:34.265 6.969 - 7.016: 97.9451% ( 2) 00:11:34.265 7.016 - 7.064: 97.9527% ( 1) 00:11:34.265 7.064 - 7.111: 97.9678% ( 2) 00:11:34.265 7.111 - 7.159: 97.9906% ( 3) 00:11:34.265 7.253 - 7.301: 98.0058% ( 2) 00:11:34.265 7.301 - 7.348: 98.0133% ( 1) 00:11:34.265 7.348 - 7.396: 98.0209% ( 1) 00:11:34.265 7.396 - 7.443: 98.0285% ( 1) 00:11:34.265 7.443 - 7.490: 98.0437% ( 2) 00:11:34.265 7.490 - 7.538: 98.0513% ( 1) 00:11:34.265 7.538 - 7.585: 98.0588% ( 1) 00:11:34.265 7.585 - 7.633: 98.0664% ( 1) 00:11:34.265 7.633 - 7.680: 98.0740% ( 1) 00:11:34.265 7.680 - 7.727: 98.0892% ( 2) 00:11:34.265 7.727 - 7.775: 98.0968% ( 1) 00:11:34.265 7.775 - 7.822: 98.1119% ( 2) 00:11:34.265 7.822 - 7.870: 98.1195% ( 1) 00:11:34.265 7.870 - 7.917: 98.1271% ( 1) 00:11:34.265 7.917 - 7.964: 98.1347% ( 1) 00:11:34.265 7.964 - 8.012: 98.1498% ( 2) 00:11:34.265 8.012 - 8.059: 98.1574% ( 1) 00:11:34.265 8.059 - 8.107: 98.1726% ( 2) 00:11:34.265 8.107 - 8.154: 98.1802% ( 1) 00:11:34.265 8.249 - 8.296: 98.2181% ( 5) 00:11:34.265 8.344 - 8.391: 98.2332% ( 2) 00:11:34.265 8.391 - 8.439: 98.2408% ( 1) 00:11:34.265 8.439 - 8.486: 98.2636% ( 3) 00:11:34.265 8.533 - 8.581: 98.2712% ( 1) 00:11:34.265 8.581 - 8.628: 98.2863% ( 2) 00:11:34.265 8.628 - 8.676: 98.2939% ( 1) 00:11:34.265 8.676 - 8.723: 98.3015% ( 1) 00:11:34.265 8.770 - 8.818: 98.3091% ( 1) 00:11:34.265 8.913 - 8.960: 98.3318% ( 3) 00:11:34.265 9.055 - 9.102: 98.3394% ( 1) 00:11:34.265 9.150 - 9.197: 98.3470% ( 1) 00:11:34.265 9.197 - 9.244: 98.3546% ( 1) 00:11:34.265 9.292 - 9.339: 98.3621% ( 1) 00:11:34.265 9.387 - 9.434: 98.3697% ( 1) 00:11:34.265 9.434 - 9.481: 98.3849% ( 2) 00:11:34.265 9.529 - 9.576: 98.3925% ( 1) 00:11:34.265 9.624 - 9.671: 98.4076% ( 2) 00:11:34.265 9.671 - 9.719: 98.4152% ( 1) 00:11:34.265 9.719 - 9.766: 98.4228% ( 1) 00:11:34.265 9.766 - 9.813: 98.4380% ( 2) 00:11:34.265 9.908 - 9.956: 98.4456% ( 1) 00:11:34.265 9.956 - 10.003: 98.4607% ( 2) 00:11:34.265 10.098 - 10.145: 98.4683% ( 1) 00:11:34.265 10.145 - 10.193: 98.4759% ( 1) 00:11:34.265 10.193 - 10.240: 98.4835% ( 1) 00:11:34.265 10.287 - 10.335: 98.4911% ( 1) 00:11:34.265 10.382 - 10.430: 98.4986% ( 1) 00:11:34.265 10.524 - 10.572: 98.5062% ( 1) 00:11:34.266 10.572 - 10.619: 98.5214% ( 2) 00:11:34.266 10.904 - 10.951: 98.5290% ( 1) 00:11:34.266 10.999 - 11.046: 98.5365% ( 1) 00:11:34.266 11.188 - 11.236: 98.5441% ( 1) 00:11:34.266 11.330 - 11.378: 98.5517% ( 1) 00:11:34.266 11.425 - 11.473: 98.5669% ( 2) 00:11:34.266 11.520 - 11.567: 98.5745% ( 1) 00:11:34.266 11.567 - 11.615: 98.5896% ( 2) 00:11:34.266 11.804 - 11.852: 98.5972% ( 1) 00:11:34.266 11.852 - 11.899: 98.6200% ( 3) 00:11:34.266 12.089 - 12.136: 98.6351% ( 2) 00:11:34.266 12.136 - 12.231: 98.6503% ( 2) 00:11:34.266 12.231 - 12.326: 98.6579% ( 1) 00:11:34.266 12.516 - 12.610: 98.6655% ( 1) 00:11:34.266 12.610 - 12.705: 98.6730% ( 1) 00:11:34.266 12.800 - 12.895: 98.6882% ( 2) 00:11:34.266 12.990 - 13.084: 98.6958% ( 1) 00:11:34.266 13.084 - 13.179: 98.7034% ( 1) 00:11:34.266 13.179 - 13.274: 98.7261% ( 3) 00:11:34.266 13.369 - 13.464: 98.7337% ( 1) 00:11:34.266 13.559 - 13.653: 98.7489% ( 2) 00:11:34.266 13.748 - 13.843: 98.7564% ( 1) 00:11:34.266 13.843 - 13.938: 98.7640% ( 1) 
00:11:34.266 13.938 - 14.033: 98.7716% ( 1) 00:11:34.266 14.033 - 14.127: 98.7868% ( 2) 00:11:34.266 14.127 - 14.222: 98.7944% ( 1) 00:11:34.266 14.412 - 14.507: 98.8019% ( 1) 00:11:34.266 14.507 - 14.601: 98.8171% ( 2) 00:11:34.266 14.696 - 14.791: 98.8247% ( 1) 00:11:34.266 14.886 - 14.981: 98.8323% ( 1) 00:11:34.266 15.076 - 15.170: 98.8399% ( 1) 00:11:34.266 15.170 - 15.265: 98.8474% ( 1) 00:11:34.266 15.265 - 15.360: 98.8550% ( 1) 00:11:34.266 16.119 - 16.213: 98.8626% ( 1) 00:11:34.266 17.256 - 17.351: 98.8702% ( 1) 00:11:34.266 17.351 - 17.446: 98.8929% ( 3) 00:11:34.266 17.446 - 17.541: 98.9233% ( 4) 00:11:34.266 17.541 - 17.636: 98.9688% ( 6) 00:11:34.266 17.636 - 17.730: 98.9839% ( 2) 00:11:34.266 17.730 - 17.825: 99.0067% ( 3) 00:11:34.266 17.825 - 17.920: 99.0598% ( 7) 00:11:34.266 17.920 - 18.015: 99.1204% ( 8) 00:11:34.266 18.015 - 18.110: 99.1735% ( 7) 00:11:34.266 18.110 - 18.204: 99.2266% ( 7) 00:11:34.266 18.204 - 18.299: 99.2872% ( 8) 00:11:34.266 18.299 - 18.394: 99.3555% ( 9) 00:11:34.266 18.394 - 18.489: 99.4237% ( 9) 00:11:34.266 18.489 - 18.584: 99.4540% ( 4) 00:11:34.266 18.584 - 18.679: 99.5299% ( 10) 00:11:34.266 18.679 - 18.773: 99.5678% ( 5) 00:11:34.266 18.773 - 18.868: 99.5981% ( 4) 00:11:34.266 18.868 - 18.963: 99.6360% ( 5) 00:11:34.266 18.963 - 19.058: 99.6436% ( 1) 00:11:34.266 19.342 - 19.437: 99.6739% ( 4) 00:11:34.266 19.437 - 19.532: 99.6967% ( 3) 00:11:34.266 19.532 - 19.627: 99.7043% ( 1) 00:11:34.266 19.627 - 19.721: 99.7422% ( 5) 00:11:34.266 19.816 - 19.911: 99.7498% ( 1) 00:11:34.266 19.911 - 20.006: 99.7801% ( 4) 00:11:34.266 20.006 - 20.101: 99.7877% ( 1) 00:11:34.266 20.385 - 20.480: 99.7953% ( 1) 00:11:34.266 20.859 - 20.954: 99.8029% ( 1) 00:11:34.266 22.566 - 22.661: 99.8104% ( 1) 00:11:34.266 22.756 - 22.850: 99.8256% ( 2) 00:11:34.266 23.419 - 23.514: 99.8332% ( 1) 00:11:34.266 24.841 - 25.031: 99.8408% ( 1) 00:11:34.266 25.031 - 25.221: 99.8483% ( 1) 00:11:34.266 25.410 - 25.600: 99.8559% ( 1) 00:11:34.266 26.169 - 26.359: 99.8635% ( 1) 00:11:34.266 28.255 - 28.444: 99.8711% ( 1) 00:11:34.266 28.824 - 29.013: 99.8787% ( 1) 00:11:34.266 30.910 - 31.099: 99.8863% ( 1) 00:11:34.266 58.406 - 58.785: 99.8938% ( 1) 00:11:34.266 3980.705 - 4004.978: 99.9848% ( 12) 00:11:34.266 4004.978 - 4029.250: 100.0000% ( 2) 00:11:34.266 00:11:34.266 Complete histogram 00:11:34.266 ================== 00:11:34.266 Range in us Cumulative Count 00:11:34.266 2.062 - 2.074: 0.0607% ( 8) 00:11:34.266 2.074 - 2.086: 6.1723% ( 806) 00:11:34.266 2.086 - 2.098: 13.8838% ( 1017) 00:11:34.266 2.098 - 2.110: 19.1386% ( 693) 00:11:34.266 2.110 - 2.121: 50.1896% ( 4095) 00:11:34.266 2.121 - 2.133: 60.2062% ( 1321) 00:11:34.266 2.133 - 2.145: 62.4204% ( 292) 00:11:34.266 2.145 - 2.157: 66.1435% ( 491) 00:11:34.266 2.157 - 2.169: 67.7282% ( 209) 00:11:34.266 2.169 - 2.181: 70.4732% ( 362) 00:11:34.266 2.181 - 2.193: 78.4729% ( 1055) 00:11:34.266 2.193 - 2.204: 81.1647% ( 355) 00:11:34.266 2.204 - 2.216: 81.8699% ( 93) 00:11:34.266 2.216 - 2.228: 83.5153% ( 217) 00:11:34.266 2.228 - 2.240: 84.8044% ( 170) 00:11:34.266 2.240 - 2.252: 87.1777% ( 313) 00:11:34.266 2.252 - 2.264: 90.8857% ( 489) 00:11:34.266 2.264 - 2.276: 92.8344% ( 257) 00:11:34.266 2.276 - 2.287: 93.4486% ( 81) 00:11:34.266 2.287 - 2.299: 93.9490% ( 66) 00:11:34.266 2.299 - 2.311: 94.4116% ( 61) 00:11:34.266 2.311 - 2.323: 94.8665% ( 60) 00:11:34.266 2.323 - 2.335: 95.1168% ( 33) 00:11:34.266 2.335 - 2.347: 95.2760% ( 21) 00:11:34.266 2.347 - 2.359: 95.4732% ( 26) 00:11:34.266 2.359 - 2.370: 95.5566% ( 11) 
00:11:34.266 2.370 - 2.382: 95.6931% ( 18) 00:11:34.266 2.382 - 2.394: 95.9281% ( 31) 00:11:34.266 2.394 - 2.406: 96.2466% ( 42) 00:11:34.266 2.406 - 2.418: 96.5347% ( 38) 00:11:34.266 2.418 - 2.430: 96.8305% ( 39) 00:11:34.266 2.430 - 2.441: 97.2020% ( 49) 00:11:34.266 2.441 - 2.453: 97.4446% ( 32) 00:11:34.266 2.453 - 2.465: 97.7100% ( 35) 00:11:34.266 2.465 - 2.477: 97.9072% ( 26) 00:11:34.266 2.477 - 2.489: 98.0285% ( 16) 00:11:34.266 2.489 - 2.501: 98.1043% ( 10) 00:11:34.266 2.501 - 2.513: 98.1498% ( 6) 00:11:34.266 2.513 - 2.524: 98.1953% ( 6) 00:11:34.266 2.524 - 2.536: 9[2024-07-12 15:48:31.170923] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:34.266 8.2787% ( 11) 00:11:34.266 2.536 - 2.548: 98.3167% ( 5) 00:11:34.266 2.548 - 2.560: 98.3773% ( 8) 00:11:34.266 2.560 - 2.572: 98.4076% ( 4) 00:11:34.266 2.572 - 2.584: 98.4304% ( 3) 00:11:34.266 2.584 - 2.596: 98.4456% ( 2) 00:11:34.266 2.596 - 2.607: 98.4607% ( 2) 00:11:34.266 2.607 - 2.619: 98.4683% ( 1) 00:11:34.266 2.619 - 2.631: 98.4759% ( 1) 00:11:34.266 2.631 - 2.643: 98.4835% ( 1) 00:11:34.266 2.655 - 2.667: 98.4986% ( 2) 00:11:34.266 2.679 - 2.690: 98.5138% ( 2) 00:11:34.266 2.702 - 2.714: 98.5214% ( 1) 00:11:34.266 2.738 - 2.750: 98.5365% ( 2) 00:11:34.266 2.856 - 2.868: 98.5441% ( 1) 00:11:34.266 3.129 - 3.153: 98.5517% ( 1) 00:11:34.266 3.224 - 3.247: 98.5593% ( 1) 00:11:34.266 3.247 - 3.271: 98.5745% ( 2) 00:11:34.266 3.295 - 3.319: 98.5820% ( 1) 00:11:34.266 3.319 - 3.342: 98.5896% ( 1) 00:11:34.266 3.342 - 3.366: 98.6275% ( 5) 00:11:34.266 3.366 - 3.390: 98.6351% ( 1) 00:11:34.266 3.390 - 3.413: 98.6427% ( 1) 00:11:34.266 3.413 - 3.437: 98.6503% ( 1) 00:11:34.266 3.461 - 3.484: 98.6882% ( 5) 00:11:34.266 3.627 - 3.650: 98.6958% ( 1) 00:11:34.266 3.721 - 3.745: 98.7034% ( 1) 00:11:34.266 3.769 - 3.793: 98.7109% ( 1) 00:11:34.266 3.864 - 3.887: 98.7185% ( 1) 00:11:34.266 3.887 - 3.911: 98.7413% ( 3) 00:11:34.266 3.935 - 3.959: 98.7489% ( 1) 00:11:34.266 3.959 - 3.982: 98.7564% ( 1) 00:11:34.266 4.053 - 4.077: 98.7640% ( 1) 00:11:34.266 4.622 - 4.646: 98.7716% ( 1) 00:11:34.266 4.978 - 5.001: 98.7792% ( 1) 00:11:34.266 5.001 - 5.025: 98.7868% ( 1) 00:11:34.266 5.570 - 5.594: 98.7944% ( 1) 00:11:34.266 5.594 - 5.618: 98.8019% ( 1) 00:11:34.266 5.665 - 5.689: 98.8095% ( 1) 00:11:34.266 5.760 - 5.784: 98.8171% ( 1) 00:11:34.266 5.807 - 5.831: 98.8247% ( 1) 00:11:34.266 5.855 - 5.879: 98.8323% ( 1) 00:11:34.266 5.926 - 5.950: 98.8399% ( 1) 00:11:34.266 5.950 - 5.973: 98.8550% ( 2) 00:11:34.266 6.068 - 6.116: 98.8702% ( 2) 00:11:34.266 6.116 - 6.163: 98.8854% ( 2) 00:11:34.266 6.258 - 6.305: 98.8929% ( 1) 00:11:34.266 6.353 - 6.400: 98.9005% ( 1) 00:11:34.266 6.495 - 6.542: 98.9081% ( 1) 00:11:34.266 6.542 - 6.590: 98.9157% ( 1) 00:11:34.266 6.969 - 7.016: 98.9233% ( 1) 00:11:34.266 7.064 - 7.111: 98.9384% ( 2) 00:11:34.266 7.775 - 7.822: 98.9460% ( 1) 00:11:34.266 7.822 - 7.870: 98.9536% ( 1) 00:11:34.266 8.059 - 8.107: 98.9612% ( 1) 00:11:34.266 8.581 - 8.628: 98.9688% ( 1) 00:11:34.266 10.193 - 10.240: 98.9763% ( 1) 00:11:34.266 15.455 - 15.550: 98.9839% ( 1) 00:11:34.266 15.550 - 15.644: 98.9915% ( 1) 00:11:34.266 15.644 - 15.739: 99.0067% ( 2) 00:11:34.266 15.834 - 15.929: 99.0218% ( 2) 00:11:34.266 15.929 - 16.024: 99.0522% ( 4) 00:11:34.266 16.024 - 16.119: 99.0825% ( 4) 00:11:34.266 16.119 - 16.213: 99.1052% ( 3) 00:11:34.266 16.213 - 16.308: 99.1128% ( 1) 00:11:34.266 16.308 - 16.403: 99.1280% ( 2) 00:11:34.266 16.498 - 16.593: 99.1811% ( 7) 00:11:34.266 
16.593 - 16.687: 99.2114% ( 4) 00:11:34.266 16.687 - 16.782: 99.2569% ( 6) 00:11:34.266 16.782 - 16.877: 99.3024% ( 6) 00:11:34.266 16.877 - 16.972: 99.3327% ( 4) 00:11:34.266 17.067 - 17.161: 99.3479% ( 2) 00:11:34.266 17.161 - 17.256: 99.3631% ( 2) 00:11:34.266 17.351 - 17.446: 99.3706% ( 1) 00:11:34.266 17.541 - 17.636: 99.3782% ( 1) 00:11:34.266 17.636 - 17.730: 99.3934% ( 2) 00:11:34.266 17.825 - 17.920: 99.4010% ( 1) 00:11:34.266 18.015 - 18.110: 99.4086% ( 1) 00:11:34.266 18.299 - 18.394: 99.4161% ( 1) 00:11:34.267 3325.345 - 3349.618: 99.4237% ( 1) 00:11:34.267 3980.705 - 4004.978: 99.8332% ( 54) 00:11:34.267 4004.978 - 4029.250: 100.0000% ( 22) 00:11:34.267 00:11:34.267 15:48:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:11:34.267 15:48:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:34.267 15:48:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:11:34.267 15:48:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:11:34.267 15:48:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:34.267 [ 00:11:34.267 { 00:11:34.267 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:34.267 "subtype": "Discovery", 00:11:34.267 "listen_addresses": [], 00:11:34.267 "allow_any_host": true, 00:11:34.267 "hosts": [] 00:11:34.267 }, 00:11:34.267 { 00:11:34.267 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:34.267 "subtype": "NVMe", 00:11:34.267 "listen_addresses": [ 00:11:34.267 { 00:11:34.267 "trtype": "VFIOUSER", 00:11:34.267 "adrfam": "IPv4", 00:11:34.267 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:34.267 "trsvcid": "0" 00:11:34.267 } 00:11:34.267 ], 00:11:34.267 "allow_any_host": true, 00:11:34.267 "hosts": [], 00:11:34.267 "serial_number": "SPDK1", 00:11:34.267 "model_number": "SPDK bdev Controller", 00:11:34.267 "max_namespaces": 32, 00:11:34.267 "min_cntlid": 1, 00:11:34.267 "max_cntlid": 65519, 00:11:34.267 "namespaces": [ 00:11:34.267 { 00:11:34.267 "nsid": 1, 00:11:34.267 "bdev_name": "Malloc1", 00:11:34.267 "name": "Malloc1", 00:11:34.267 "nguid": "A6318B975A2D4ED487BAE0191DA7ACF5", 00:11:34.267 "uuid": "a6318b97-5a2d-4ed4-87ba-e0191da7acf5" 00:11:34.267 } 00:11:34.267 ] 00:11:34.267 }, 00:11:34.267 { 00:11:34.267 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:34.267 "subtype": "NVMe", 00:11:34.267 "listen_addresses": [ 00:11:34.267 { 00:11:34.267 "trtype": "VFIOUSER", 00:11:34.267 "adrfam": "IPv4", 00:11:34.267 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:34.267 "trsvcid": "0" 00:11:34.267 } 00:11:34.267 ], 00:11:34.267 "allow_any_host": true, 00:11:34.267 "hosts": [], 00:11:34.267 "serial_number": "SPDK2", 00:11:34.267 "model_number": "SPDK bdev Controller", 00:11:34.267 "max_namespaces": 32, 00:11:34.267 "min_cntlid": 1, 00:11:34.267 "max_cntlid": 65519, 00:11:34.267 "namespaces": [ 00:11:34.267 { 00:11:34.267 "nsid": 1, 00:11:34.267 "bdev_name": "Malloc2", 00:11:34.267 "name": "Malloc2", 00:11:34.267 "nguid": "CDBD641E901B472C9FEE2E7682E891A8", 00:11:34.267 "uuid": "cdbd641e-901b-472c-9fee-2e7682e891a8" 00:11:34.267 } 00:11:34.267 ] 00:11:34.267 } 00:11:34.267 ] 00:11:34.267 15:48:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:34.267 15:48:31 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=704700 00:11:34.267 15:48:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:11:34.267 15:48:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:34.267 15:48:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:34.267 15:48:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:34.267 15:48:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:34.267 15:48:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:34.267 15:48:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:34.267 15:48:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:11:34.526 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.526 [2024-07-12 15:48:31.667030] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:34.526 Malloc3 00:11:34.526 15:48:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:11:34.783 [2024-07-12 15:48:32.030802] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:34.783 15:48:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:35.041 Asynchronous Event Request test 00:11:35.041 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:11:35.041 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:11:35.041 Registering asynchronous event callbacks... 00:11:35.041 Starting namespace attribute notice tests for all controllers... 00:11:35.041 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:35.041 aer_cb - Changed Namespace 00:11:35.041 Cleaning up... 
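The updated subsystem listing that follows reflects the hot-added namespace: Malloc3 now appears as nsid 2 under nqn.2019-07.io.spdk:cnode1, which is what raised the namespace-attribute-changed AER reported above. As a minimal sketch (same workspace path as this job; RPC is shorthand introduced here), the rpc.py sequence driving this step is:

# Sketch only: the rpc.py calls below are the ones logged above.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# create a 64 MiB malloc bdev with a 512-byte block size
$RPC bdev_malloc_create 64 512 --name Malloc3
# expose it as namespace 2 of cnode1; the controller signals the change via AER
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
# dump the subsystem list (the JSON that follows is this call's output)
$RPC nvmf_get_subsystems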
00:11:35.041 [ 00:11:35.041 { 00:11:35.041 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:35.041 "subtype": "Discovery", 00:11:35.041 "listen_addresses": [], 00:11:35.041 "allow_any_host": true, 00:11:35.041 "hosts": [] 00:11:35.041 }, 00:11:35.041 { 00:11:35.041 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:35.041 "subtype": "NVMe", 00:11:35.041 "listen_addresses": [ 00:11:35.041 { 00:11:35.041 "trtype": "VFIOUSER", 00:11:35.041 "adrfam": "IPv4", 00:11:35.041 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:35.041 "trsvcid": "0" 00:11:35.041 } 00:11:35.041 ], 00:11:35.041 "allow_any_host": true, 00:11:35.041 "hosts": [], 00:11:35.041 "serial_number": "SPDK1", 00:11:35.041 "model_number": "SPDK bdev Controller", 00:11:35.041 "max_namespaces": 32, 00:11:35.041 "min_cntlid": 1, 00:11:35.041 "max_cntlid": 65519, 00:11:35.041 "namespaces": [ 00:11:35.041 { 00:11:35.041 "nsid": 1, 00:11:35.041 "bdev_name": "Malloc1", 00:11:35.041 "name": "Malloc1", 00:11:35.041 "nguid": "A6318B975A2D4ED487BAE0191DA7ACF5", 00:11:35.041 "uuid": "a6318b97-5a2d-4ed4-87ba-e0191da7acf5" 00:11:35.041 }, 00:11:35.041 { 00:11:35.041 "nsid": 2, 00:11:35.041 "bdev_name": "Malloc3", 00:11:35.041 "name": "Malloc3", 00:11:35.041 "nguid": "821EBFE27D464969B52FF245862467DA", 00:11:35.041 "uuid": "821ebfe2-7d46-4969-b52f-f245862467da" 00:11:35.041 } 00:11:35.041 ] 00:11:35.041 }, 00:11:35.041 { 00:11:35.041 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:35.041 "subtype": "NVMe", 00:11:35.041 "listen_addresses": [ 00:11:35.041 { 00:11:35.041 "trtype": "VFIOUSER", 00:11:35.041 "adrfam": "IPv4", 00:11:35.041 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:35.041 "trsvcid": "0" 00:11:35.041 } 00:11:35.041 ], 00:11:35.041 "allow_any_host": true, 00:11:35.041 "hosts": [], 00:11:35.041 "serial_number": "SPDK2", 00:11:35.041 "model_number": "SPDK bdev Controller", 00:11:35.041 "max_namespaces": 32, 00:11:35.041 "min_cntlid": 1, 00:11:35.041 "max_cntlid": 65519, 00:11:35.041 "namespaces": [ 00:11:35.041 { 00:11:35.041 "nsid": 1, 00:11:35.041 "bdev_name": "Malloc2", 00:11:35.041 "name": "Malloc2", 00:11:35.041 "nguid": "CDBD641E901B472C9FEE2E7682E891A8", 00:11:35.041 "uuid": "cdbd641e-901b-472c-9fee-2e7682e891a8" 00:11:35.041 } 00:11:35.041 ] 00:11:35.041 } 00:11:35.041 ] 00:11:35.042 15:48:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 704700 00:11:35.042 15:48:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:35.042 15:48:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:35.042 15:48:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:11:35.042 15:48:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:35.042 [2024-07-12 15:48:32.327236] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:11:35.042 [2024-07-12 15:48:32.327278] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid704837 ] 00:11:35.301 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.301 [2024-07-12 15:48:32.361906] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:11:35.301 [2024-07-12 15:48:32.370076] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:35.301 [2024-07-12 15:48:32.370106] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f575e003000 00:11:35.301 [2024-07-12 15:48:32.371074] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.301 [2024-07-12 15:48:32.372077] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.301 [2024-07-12 15:48:32.373086] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.301 [2024-07-12 15:48:32.374091] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:35.301 [2024-07-12 15:48:32.375090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:35.301 [2024-07-12 15:48:32.376096] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.301 [2024-07-12 15:48:32.377103] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:35.301 [2024-07-12 15:48:32.378112] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:35.301 [2024-07-12 15:48:32.379127] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:35.301 [2024-07-12 15:48:32.379148] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f575dff8000 00:11:35.301 [2024-07-12 15:48:32.380301] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:35.301 [2024-07-12 15:48:32.396396] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:11:35.301 [2024-07-12 15:48:32.396432] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:11:35.301 [2024-07-12 15:48:32.400536] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:35.301 [2024-07-12 15:48:32.400588] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:35.301 [2024-07-12 15:48:32.400676] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:11:35.301 [2024-07-12 15:48:32.400704] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:11:35.301 [2024-07-12 15:48:32.400715] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:11:35.301 [2024-07-12 15:48:32.401541] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:11:35.301 [2024-07-12 15:48:32.401561] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:11:35.301 [2024-07-12 15:48:32.401574] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:11:35.301 [2024-07-12 15:48:32.402552] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:11:35.301 [2024-07-12 15:48:32.402572] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:11:35.301 [2024-07-12 15:48:32.402585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:11:35.301 [2024-07-12 15:48:32.403552] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:11:35.301 [2024-07-12 15:48:32.403572] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:35.301 [2024-07-12 15:48:32.404562] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:11:35.301 [2024-07-12 15:48:32.404582] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:11:35.301 [2024-07-12 15:48:32.404591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:11:35.301 [2024-07-12 15:48:32.404603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:35.301 [2024-07-12 15:48:32.404712] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:11:35.301 [2024-07-12 15:48:32.404720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:35.301 [2024-07-12 15:48:32.404727] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:11:35.301 [2024-07-12 15:48:32.405568] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:11:35.301 [2024-07-12 15:48:32.406571] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:11:35.301 [2024-07-12 15:48:32.407579] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:35.301 [2024-07-12 15:48:32.408573] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:35.301 [2024-07-12 15:48:32.408641] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:35.301 [2024-07-12 15:48:32.409594] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:11:35.301 [2024-07-12 15:48:32.409613] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:35.301 [2024-07-12 15:48:32.409623] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:11:35.301 [2024-07-12 15:48:32.409651] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:11:35.301 [2024-07-12 15:48:32.409668] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:11:35.301 [2024-07-12 15:48:32.409689] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:35.301 [2024-07-12 15:48:32.409699] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.301 [2024-07-12 15:48:32.409716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.301 [2024-07-12 15:48:32.414763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:35.301 [2024-07-12 15:48:32.414785] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:11:35.301 [2024-07-12 15:48:32.414793] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:11:35.301 [2024-07-12 15:48:32.414801] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:11:35.301 [2024-07-12 15:48:32.414808] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:35.301 [2024-07-12 15:48:32.414816] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:11:35.301 [2024-07-12 15:48:32.414824] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:11:35.301 [2024-07-12 15:48:32.414831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:11:35.301 [2024-07-12 15:48:32.414844] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:11:35.301 [2024-07-12 15:48:32.414865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:11:35.301 [2024-07-12 15:48:32.422761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:35.301 [2024-07-12 15:48:32.422785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.301 [2024-07-12 15:48:32.422799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.301 [2024-07-12 15:48:32.422811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.301 [2024-07-12 15:48:32.422824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:35.301 [2024-07-12 15:48:32.422833] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:11:35.301 [2024-07-12 15:48:32.422849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:35.301 [2024-07-12 15:48:32.422865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:35.301 [2024-07-12 15:48:32.430763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:35.301 [2024-07-12 15:48:32.430781] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:11:35.301 [2024-07-12 15:48:32.430795] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:35.301 [2024-07-12 15:48:32.430810] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:11:35.301 [2024-07-12 15:48:32.430821] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:11:35.301 [2024-07-12 15:48:32.430835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:35.301 [2024-07-12 15:48:32.438761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:35.301 [2024-07-12 15:48:32.438836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:11:35.301 [2024-07-12 15:48:32.438854] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:11:35.301 [2024-07-12 15:48:32.438867] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:35.301 [2024-07-12 15:48:32.438875] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:35.301 [2024-07-12 15:48:32.438885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:11:35.301 [2024-07-12 15:48:32.446764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:35.301 [2024-07-12 15:48:32.446787] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:11:35.301 [2024-07-12 15:48:32.446806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:11:35.302 [2024-07-12 15:48:32.446820] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:11:35.302 [2024-07-12 15:48:32.446832] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:35.302 [2024-07-12 15:48:32.446841] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.302 [2024-07-12 15:48:32.446851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.302 [2024-07-12 15:48:32.454746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:35.302 [2024-07-12 15:48:32.454777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:35.302 [2024-07-12 15:48:32.454793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:35.302 [2024-07-12 15:48:32.454807] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:35.302 [2024-07-12 15:48:32.454816] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.302 [2024-07-12 15:48:32.454826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:35.302 [2024-07-12 15:48:32.462761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:35.302 [2024-07-12 15:48:32.462792] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:35.302 [2024-07-12 15:48:32.462810] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:11:35.302 [2024-07-12 15:48:32.462824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:11:35.302 [2024-07-12 15:48:32.462834] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:11:35.302 [2024-07-12 15:48:32.462843] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:35.302 [2024-07-12 15:48:32.462851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:11:35.302 
[2024-07-12 15:48:32.462859] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:11:35.302 [2024-07-12 15:48:32.462867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:11:35.302 [2024-07-12 15:48:32.462875] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:11:35.302 [2024-07-12 15:48:32.462900] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:35.302 [2024-07-12 15:48:32.470764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:35.302 [2024-07-12 15:48:32.470790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:35.302 [2024-07-12 15:48:32.478761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:35.302 [2024-07-12 15:48:32.478787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:35.302 [2024-07-12 15:48:32.486748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:35.302 [2024-07-12 15:48:32.486773] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:35.302 [2024-07-12 15:48:32.494763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:35.302 [2024-07-12 15:48:32.494795] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:35.302 [2024-07-12 15:48:32.494807] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:35.302 [2024-07-12 15:48:32.494813] nvme_pcie_common.c:1240:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:35.302 [2024-07-12 15:48:32.494819] nvme_pcie_common.c:1256:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:35.302 [2024-07-12 15:48:32.494829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:11:35.302 [2024-07-12 15:48:32.494841] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:35.302 [2024-07-12 15:48:32.494850] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:35.302 [2024-07-12 15:48:32.494859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:35.302 [2024-07-12 15:48:32.494870] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:35.302 [2024-07-12 15:48:32.494879] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:35.302 [2024-07-12 15:48:32.494888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:11:35.302 [2024-07-12 15:48:32.494904] nvme_pcie_common.c:1203:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:35.302 [2024-07-12 15:48:32.494913] nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:35.302 [2024-07-12 15:48:32.494922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:35.302 [2024-07-12 15:48:32.502763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:11:35.302 [2024-07-12 15:48:32.502791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:35.302 [2024-07-12 15:48:32.502809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:35.302 [2024-07-12 15:48:32.502821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:35.302 ===================================================== 00:11:35.302 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:35.302 ===================================================== 00:11:35.302 Controller Capabilities/Features 00:11:35.302 ================================ 00:11:35.302 Vendor ID: 4e58 00:11:35.302 Subsystem Vendor ID: 4e58 00:11:35.302 Serial Number: SPDK2 00:11:35.302 Model Number: SPDK bdev Controller 00:11:35.302 Firmware Version: 24.09 00:11:35.302 Recommended Arb Burst: 6 00:11:35.302 IEEE OUI Identifier: 8d 6b 50 00:11:35.302 Multi-path I/O 00:11:35.302 May have multiple subsystem ports: Yes 00:11:35.302 May have multiple controllers: Yes 00:11:35.302 Associated with SR-IOV VF: No 00:11:35.302 Max Data Transfer Size: 131072 00:11:35.302 Max Number of Namespaces: 32 00:11:35.302 Max Number of I/O Queues: 127 00:11:35.302 NVMe Specification Version (VS): 1.3 00:11:35.302 NVMe Specification Version (Identify): 1.3 00:11:35.302 Maximum Queue Entries: 256 00:11:35.302 Contiguous Queues Required: Yes 00:11:35.302 Arbitration Mechanisms Supported 00:11:35.302 Weighted Round Robin: Not Supported 00:11:35.302 Vendor Specific: Not Supported 00:11:35.302 Reset Timeout: 15000 ms 00:11:35.302 Doorbell Stride: 4 bytes 00:11:35.302 NVM Subsystem Reset: Not Supported 00:11:35.302 Command Sets Supported 00:11:35.302 NVM Command Set: Supported 00:11:35.302 Boot Partition: Not Supported 00:11:35.302 Memory Page Size Minimum: 4096 bytes 00:11:35.302 Memory Page Size Maximum: 4096 bytes 00:11:35.302 Persistent Memory Region: Not Supported 00:11:35.302 Optional Asynchronous Events Supported 00:11:35.302 Namespace Attribute Notices: Supported 00:11:35.302 Firmware Activation Notices: Not Supported 00:11:35.302 ANA Change Notices: Not Supported 00:11:35.302 PLE Aggregate Log Change Notices: Not Supported 00:11:35.302 LBA Status Info Alert Notices: Not Supported 00:11:35.302 EGE Aggregate Log Change Notices: Not Supported 00:11:35.302 Normal NVM Subsystem Shutdown event: Not Supported 00:11:35.302 Zone Descriptor Change Notices: Not Supported 00:11:35.302 Discovery Log Change Notices: Not Supported 00:11:35.302 Controller Attributes 00:11:35.302 128-bit Host Identifier: Supported 00:11:35.302 Non-Operational Permissive Mode: Not Supported 00:11:35.302 NVM Sets: Not Supported 00:11:35.302 Read Recovery Levels: Not Supported 
00:11:35.302 Endurance Groups: Not Supported 00:11:35.302 Predictable Latency Mode: Not Supported 00:11:35.302 Traffic Based Keep ALive: Not Supported 00:11:35.302 Namespace Granularity: Not Supported 00:11:35.302 SQ Associations: Not Supported 00:11:35.302 UUID List: Not Supported 00:11:35.302 Multi-Domain Subsystem: Not Supported 00:11:35.302 Fixed Capacity Management: Not Supported 00:11:35.302 Variable Capacity Management: Not Supported 00:11:35.302 Delete Endurance Group: Not Supported 00:11:35.302 Delete NVM Set: Not Supported 00:11:35.302 Extended LBA Formats Supported: Not Supported 00:11:35.302 Flexible Data Placement Supported: Not Supported 00:11:35.302 00:11:35.302 Controller Memory Buffer Support 00:11:35.302 ================================ 00:11:35.302 Supported: No 00:11:35.302 00:11:35.302 Persistent Memory Region Support 00:11:35.302 ================================ 00:11:35.302 Supported: No 00:11:35.302 00:11:35.302 Admin Command Set Attributes 00:11:35.302 ============================ 00:11:35.302 Security Send/Receive: Not Supported 00:11:35.302 Format NVM: Not Supported 00:11:35.302 Firmware Activate/Download: Not Supported 00:11:35.302 Namespace Management: Not Supported 00:11:35.302 Device Self-Test: Not Supported 00:11:35.302 Directives: Not Supported 00:11:35.302 NVMe-MI: Not Supported 00:11:35.302 Virtualization Management: Not Supported 00:11:35.302 Doorbell Buffer Config: Not Supported 00:11:35.302 Get LBA Status Capability: Not Supported 00:11:35.302 Command & Feature Lockdown Capability: Not Supported 00:11:35.302 Abort Command Limit: 4 00:11:35.302 Async Event Request Limit: 4 00:11:35.302 Number of Firmware Slots: N/A 00:11:35.302 Firmware Slot 1 Read-Only: N/A 00:11:35.302 Firmware Activation Without Reset: N/A 00:11:35.302 Multiple Update Detection Support: N/A 00:11:35.302 Firmware Update Granularity: No Information Provided 00:11:35.302 Per-Namespace SMART Log: No 00:11:35.302 Asymmetric Namespace Access Log Page: Not Supported 00:11:35.302 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:11:35.303 Command Effects Log Page: Supported 00:11:35.303 Get Log Page Extended Data: Supported 00:11:35.303 Telemetry Log Pages: Not Supported 00:11:35.303 Persistent Event Log Pages: Not Supported 00:11:35.303 Supported Log Pages Log Page: May Support 00:11:35.303 Commands Supported & Effects Log Page: Not Supported 00:11:35.303 Feature Identifiers & Effects Log Page:May Support 00:11:35.303 NVMe-MI Commands & Effects Log Page: May Support 00:11:35.303 Data Area 4 for Telemetry Log: Not Supported 00:11:35.303 Error Log Page Entries Supported: 128 00:11:35.303 Keep Alive: Supported 00:11:35.303 Keep Alive Granularity: 10000 ms 00:11:35.303 00:11:35.303 NVM Command Set Attributes 00:11:35.303 ========================== 00:11:35.303 Submission Queue Entry Size 00:11:35.303 Max: 64 00:11:35.303 Min: 64 00:11:35.303 Completion Queue Entry Size 00:11:35.303 Max: 16 00:11:35.303 Min: 16 00:11:35.303 Number of Namespaces: 32 00:11:35.303 Compare Command: Supported 00:11:35.303 Write Uncorrectable Command: Not Supported 00:11:35.303 Dataset Management Command: Supported 00:11:35.303 Write Zeroes Command: Supported 00:11:35.303 Set Features Save Field: Not Supported 00:11:35.303 Reservations: Not Supported 00:11:35.303 Timestamp: Not Supported 00:11:35.303 Copy: Supported 00:11:35.303 Volatile Write Cache: Present 00:11:35.303 Atomic Write Unit (Normal): 1 00:11:35.303 Atomic Write Unit (PFail): 1 00:11:35.303 Atomic Compare & Write Unit: 1 00:11:35.303 Fused Compare & Write: 
Supported 00:11:35.303 Scatter-Gather List 00:11:35.303 SGL Command Set: Supported (Dword aligned) 00:11:35.303 SGL Keyed: Not Supported 00:11:35.303 SGL Bit Bucket Descriptor: Not Supported 00:11:35.303 SGL Metadata Pointer: Not Supported 00:11:35.303 Oversized SGL: Not Supported 00:11:35.303 SGL Metadata Address: Not Supported 00:11:35.303 SGL Offset: Not Supported 00:11:35.303 Transport SGL Data Block: Not Supported 00:11:35.303 Replay Protected Memory Block: Not Supported 00:11:35.303 00:11:35.303 Firmware Slot Information 00:11:35.303 ========================= 00:11:35.303 Active slot: 1 00:11:35.303 Slot 1 Firmware Revision: 24.09 00:11:35.303 00:11:35.303 00:11:35.303 Commands Supported and Effects 00:11:35.303 ============================== 00:11:35.303 Admin Commands 00:11:35.303 -------------- 00:11:35.303 Get Log Page (02h): Supported 00:11:35.303 Identify (06h): Supported 00:11:35.303 Abort (08h): Supported 00:11:35.303 Set Features (09h): Supported 00:11:35.303 Get Features (0Ah): Supported 00:11:35.303 Asynchronous Event Request (0Ch): Supported 00:11:35.303 Keep Alive (18h): Supported 00:11:35.303 I/O Commands 00:11:35.303 ------------ 00:11:35.303 Flush (00h): Supported LBA-Change 00:11:35.303 Write (01h): Supported LBA-Change 00:11:35.303 Read (02h): Supported 00:11:35.303 Compare (05h): Supported 00:11:35.303 Write Zeroes (08h): Supported LBA-Change 00:11:35.303 Dataset Management (09h): Supported LBA-Change 00:11:35.303 Copy (19h): Supported LBA-Change 00:11:35.303 00:11:35.303 Error Log 00:11:35.303 ========= 00:11:35.303 00:11:35.303 Arbitration 00:11:35.303 =========== 00:11:35.303 Arbitration Burst: 1 00:11:35.303 00:11:35.303 Power Management 00:11:35.303 ================ 00:11:35.303 Number of Power States: 1 00:11:35.303 Current Power State: Power State #0 00:11:35.303 Power State #0: 00:11:35.303 Max Power: 0.00 W 00:11:35.303 Non-Operational State: Operational 00:11:35.303 Entry Latency: Not Reported 00:11:35.303 Exit Latency: Not Reported 00:11:35.303 Relative Read Throughput: 0 00:11:35.303 Relative Read Latency: 0 00:11:35.303 Relative Write Throughput: 0 00:11:35.303 Relative Write Latency: 0 00:11:35.303 Idle Power: Not Reported 00:11:35.303 Active Power: Not Reported 00:11:35.303 Non-Operational Permissive Mode: Not Supported 00:11:35.303 00:11:35.303 Health Information 00:11:35.303 ================== 00:11:35.303 Critical Warnings: 00:11:35.303 Available Spare Space: OK 00:11:35.303 Temperature: OK 00:11:35.303 Device Reliability: OK 00:11:35.303 Read Only: No 00:11:35.303 Volatile Memory Backup: OK 00:11:35.303 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:35.303 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:35.303 Available Spare: 0% 00:11:35.303 Available Sp[2024-07-12 15:48:32.502938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:35.303 [2024-07-12 15:48:32.510765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:35.303 [2024-07-12 15:48:32.510812] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:11:35.303 [2024-07-12 15:48:32.510830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.303 [2024-07-12 15:48:32.510841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.303 [2024-07-12 15:48:32.510851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.303 [2024-07-12 15:48:32.510861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.303 [2024-07-12 15:48:32.510951] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:11:35.303 [2024-07-12 15:48:32.510973] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:11:35.303 [2024-07-12 15:48:32.511954] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:35.303 [2024-07-12 15:48:32.512024] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:11:35.303 [2024-07-12 15:48:32.512038] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:11:35.303 [2024-07-12 15:48:32.514762] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:11:35.303 [2024-07-12 15:48:32.514787] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 2 milliseconds 00:11:35.303 [2024-07-12 15:48:32.514839] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:11:35.303 [2024-07-12 15:48:32.516026] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:35.303 are Threshold: 0% 00:11:35.303 Life Percentage Used: 0% 00:11:35.303 Data Units Read: 0 00:11:35.303 Data Units Written: 0 00:11:35.303 Host Read Commands: 0 00:11:35.303 Host Write Commands: 0 00:11:35.303 Controller Busy Time: 0 minutes 00:11:35.303 Power Cycles: 0 00:11:35.303 Power On Hours: 0 hours 00:11:35.303 Unsafe Shutdowns: 0 00:11:35.303 Unrecoverable Media Errors: 0 00:11:35.303 Lifetime Error Log Entries: 0 00:11:35.303 Warning Temperature Time: 0 minutes 00:11:35.303 Critical Temperature Time: 0 minutes 00:11:35.303 00:11:35.303 Number of Queues 00:11:35.303 ================ 00:11:35.303 Number of I/O Submission Queues: 127 00:11:35.303 Number of I/O Completion Queues: 127 00:11:35.303 00:11:35.303 Active Namespaces 00:11:35.303 ================= 00:11:35.303 Namespace ID:1 00:11:35.303 Error Recovery Timeout: Unlimited 00:11:35.303 Command Set Identifier: NVM (00h) 00:11:35.303 Deallocate: Supported 00:11:35.303 Deallocated/Unwritten Error: Not Supported 00:11:35.303 Deallocated Read Value: Unknown 00:11:35.303 Deallocate in Write Zeroes: Not Supported 00:11:35.303 Deallocated Guard Field: 0xFFFF 00:11:35.303 Flush: Supported 00:11:35.303 Reservation: Supported 00:11:35.303 Namespace Sharing Capabilities: Multiple Controllers 00:11:35.303 Size (in LBAs): 131072 (0GiB) 00:11:35.303 Capacity (in LBAs): 131072 (0GiB) 00:11:35.303 Utilization (in LBAs): 131072 (0GiB) 00:11:35.303 NGUID: CDBD641E901B472C9FEE2E7682E891A8 00:11:35.303 UUID: cdbd641e-901b-472c-9fee-2e7682e891a8 00:11:35.303 Thin Provisioning: Not Supported 00:11:35.303 Per-NS Atomic Units: Yes 00:11:35.303 Atomic Boundary Size (Normal): 0 00:11:35.303 Atomic Boundary Size 
(PFail): 0 00:11:35.303 Atomic Boundary Offset: 0 00:11:35.303 Maximum Single Source Range Length: 65535 00:11:35.303 Maximum Copy Length: 65535 00:11:35.303 Maximum Source Range Count: 1 00:11:35.303 NGUID/EUI64 Never Reused: No 00:11:35.303 Namespace Write Protected: No 00:11:35.303 Number of LBA Formats: 1 00:11:35.303 Current LBA Format: LBA Format #00 00:11:35.303 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:35.303 00:11:35.303 15:48:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:35.303 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.561 [2024-07-12 15:48:32.749510] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:40.821 Initializing NVMe Controllers 00:11:40.821 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:40.821 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:40.821 Initialization complete. Launching workers. 00:11:40.821 ======================================================== 00:11:40.821 Latency(us) 00:11:40.822 Device Information : IOPS MiB/s Average min max 00:11:40.822 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34911.71 136.37 3665.74 1149.82 7309.23 00:11:40.822 ======================================================== 00:11:40.822 Total : 34911.71 136.37 3665.74 1149.82 7309.23 00:11:40.822 00:11:40.822 [2024-07-12 15:48:37.854098] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:40.822 15:48:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:11:40.822 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.822 [2024-07-12 15:48:38.094773] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:46.078 Initializing NVMe Controllers 00:11:46.078 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:46.078 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:11:46.078 Initialization complete. Launching workers. 
00:11:46.078 ======================================================== 00:11:46.078 Latency(us) 00:11:46.078 Device Information : IOPS MiB/s Average min max 00:11:46.078 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32887.90 128.47 3893.29 1192.87 9861.31 00:11:46.078 ======================================================== 00:11:46.078 Total : 32887.90 128.47 3893.29 1192.87 9861.31 00:11:46.078 00:11:46.078 [2024-07-12 15:48:43.117179] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:46.078 15:48:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:11:46.078 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.078 [2024-07-12 15:48:43.335543] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:51.336 [2024-07-12 15:48:48.469889] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:51.336 Initializing NVMe Controllers 00:11:51.336 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:51.336 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:11:51.336 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:11:51.336 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:11:51.336 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:11:51.336 Initialization complete. Launching workers. 00:11:51.336 Starting thread on core 2 00:11:51.336 Starting thread on core 3 00:11:51.336 Starting thread on core 1 00:11:51.336 15:48:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:11:51.336 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.594 [2024-07-12 15:48:48.778489] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:54.874 [2024-07-12 15:48:51.842300] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:54.875 Initializing NVMe Controllers 00:11:54.875 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:54.875 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:54.875 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:11:54.875 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:11:54.875 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:11:54.875 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:11:54.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:54.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:11:54.875 Initialization complete. Launching workers. 
00:11:54.875 Starting thread on core 1 with urgent priority queue 00:11:54.875 Starting thread on core 2 with urgent priority queue 00:11:54.875 Starting thread on core 3 with urgent priority queue 00:11:54.875 Starting thread on core 0 with urgent priority queue 00:11:54.875 SPDK bdev Controller (SPDK2 ) core 0: 7027.33 IO/s 14.23 secs/100000 ios 00:11:54.875 SPDK bdev Controller (SPDK2 ) core 1: 4756.33 IO/s 21.02 secs/100000 ios 00:11:54.875 SPDK bdev Controller (SPDK2 ) core 2: 6848.67 IO/s 14.60 secs/100000 ios 00:11:54.875 SPDK bdev Controller (SPDK2 ) core 3: 5949.67 IO/s 16.81 secs/100000 ios 00:11:54.875 ======================================================== 00:11:54.875 00:11:54.875 15:48:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:54.875 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.875 [2024-07-12 15:48:52.149213] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:54.875 Initializing NVMe Controllers 00:11:54.875 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:54.875 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:54.875 Namespace ID: 1 size: 0GB 00:11:54.875 Initialization complete. 00:11:54.875 INFO: using host memory buffer for IO 00:11:54.875 Hello world! 00:11:54.875 [2024-07-12 15:48:52.158273] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:55.132 15:48:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:11:55.132 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.389 [2024-07-12 15:48:52.452307] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:56.344 Initializing NVMe Controllers 00:11:56.344 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:56.344 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:56.344 Initialization complete. Launching workers. 
00:11:56.344 submit (in ns) avg, min, max = 8230.6, 3525.6, 4015072.2 00:11:56.344 complete (in ns) avg, min, max = 27686.8, 2067.8, 7060521.1 00:11:56.344 00:11:56.344 Submit histogram 00:11:56.344 ================ 00:11:56.344 Range in us Cumulative Count 00:11:56.344 3.508 - 3.532: 0.0152% ( 2) 00:11:56.344 3.532 - 3.556: 0.2506% ( 31) 00:11:56.344 3.556 - 3.579: 0.9266% ( 89) 00:11:56.344 3.579 - 3.603: 2.8634% ( 255) 00:11:56.344 3.603 - 3.627: 6.7978% ( 518) 00:11:56.344 3.627 - 3.650: 13.3830% ( 867) 00:11:56.344 3.650 - 3.674: 20.8568% ( 984) 00:11:56.344 3.674 - 3.698: 30.2066% ( 1231) 00:11:56.344 3.698 - 3.721: 38.4095% ( 1080) 00:11:56.344 3.721 - 3.745: 45.3820% ( 918) 00:11:56.344 3.745 - 3.769: 50.8355% ( 718) 00:11:56.344 3.769 - 3.793: 55.2712% ( 584) 00:11:56.344 3.793 - 3.816: 58.5675% ( 434) 00:11:56.344 3.816 - 3.840: 62.0690% ( 461) 00:11:56.344 3.840 - 3.864: 65.7907% ( 490) 00:11:56.344 3.864 - 3.887: 69.6187% ( 504) 00:11:56.344 3.887 - 3.911: 73.8645% ( 559) 00:11:56.344 3.911 - 3.935: 78.5280% ( 614) 00:11:56.344 3.935 - 3.959: 82.5915% ( 535) 00:11:56.344 3.959 - 3.982: 85.3714% ( 366) 00:11:56.344 3.982 - 4.006: 87.4753% ( 277) 00:11:56.345 4.006 - 4.030: 89.1007% ( 214) 00:11:56.345 4.030 - 4.053: 90.4451% ( 177) 00:11:56.345 4.053 - 4.077: 91.5616% ( 147) 00:11:56.345 4.077 - 4.101: 92.5566% ( 131) 00:11:56.345 4.101 - 4.124: 93.4680% ( 120) 00:11:56.345 4.124 - 4.148: 94.4478% ( 129) 00:11:56.345 4.148 - 4.172: 95.3289% ( 116) 00:11:56.345 4.172 - 4.196: 95.9061% ( 76) 00:11:56.345 4.196 - 4.219: 96.2175% ( 41) 00:11:56.345 4.219 - 4.243: 96.5213% ( 40) 00:11:56.345 4.243 - 4.267: 96.7264% ( 27) 00:11:56.345 4.267 - 4.290: 96.8631% ( 18) 00:11:56.345 4.290 - 4.314: 96.9771% ( 15) 00:11:56.345 4.314 - 4.338: 97.0910% ( 15) 00:11:56.345 4.338 - 4.361: 97.1897% ( 13) 00:11:56.345 4.361 - 4.385: 97.2429% ( 7) 00:11:56.345 4.385 - 4.409: 97.2809% ( 5) 00:11:56.345 4.409 - 4.433: 97.3568% ( 10) 00:11:56.345 4.433 - 4.456: 97.4024% ( 6) 00:11:56.345 4.456 - 4.480: 97.4176% ( 2) 00:11:56.345 4.480 - 4.504: 97.4328% ( 2) 00:11:56.345 4.504 - 4.527: 97.4480% ( 2) 00:11:56.345 4.527 - 4.551: 97.4632% ( 2) 00:11:56.345 4.551 - 4.575: 97.4708% ( 1) 00:11:56.345 4.575 - 4.599: 97.4784% ( 1) 00:11:56.345 4.646 - 4.670: 97.4935% ( 2) 00:11:56.345 4.670 - 4.693: 97.5011% ( 1) 00:11:56.345 4.693 - 4.717: 97.5087% ( 1) 00:11:56.345 4.717 - 4.741: 97.5239% ( 2) 00:11:56.345 4.741 - 4.764: 97.5543% ( 4) 00:11:56.345 4.764 - 4.788: 97.5771% ( 3) 00:11:56.345 4.788 - 4.812: 97.5923% ( 2) 00:11:56.345 4.812 - 4.836: 97.6379% ( 6) 00:11:56.345 4.836 - 4.859: 97.6606% ( 3) 00:11:56.345 4.859 - 4.883: 97.6910% ( 4) 00:11:56.345 4.883 - 4.907: 97.7138% ( 3) 00:11:56.345 4.907 - 4.930: 97.7518% ( 5) 00:11:56.345 4.930 - 4.954: 97.8201% ( 9) 00:11:56.345 4.954 - 4.978: 97.8429% ( 3) 00:11:56.345 4.978 - 5.001: 97.8733% ( 4) 00:11:56.345 5.001 - 5.025: 97.9113% ( 5) 00:11:56.345 5.025 - 5.049: 97.9645% ( 7) 00:11:56.345 5.049 - 5.073: 98.0024% ( 5) 00:11:56.345 5.073 - 5.096: 98.0252% ( 3) 00:11:56.345 5.096 - 5.120: 98.0328% ( 1) 00:11:56.345 5.120 - 5.144: 98.0708% ( 5) 00:11:56.345 5.144 - 5.167: 98.1240% ( 7) 00:11:56.345 5.167 - 5.191: 98.1695% ( 6) 00:11:56.345 5.191 - 5.215: 98.2075% ( 5) 00:11:56.345 5.215 - 5.239: 98.2531% ( 6) 00:11:56.345 5.239 - 5.262: 98.2683% ( 2) 00:11:56.345 5.262 - 5.286: 98.2759% ( 1) 00:11:56.345 5.286 - 5.310: 98.2835% ( 1) 00:11:56.345 5.310 - 5.333: 98.2911% ( 1) 00:11:56.345 5.333 - 5.357: 98.3062% ( 2) 00:11:56.345 5.357 - 5.381: 98.3290% ( 3) 
00:11:56.345 5.381 - 5.404: 98.3366% ( 1) 00:11:56.345 5.428 - 5.452: 98.3442% ( 1) 00:11:56.345 5.523 - 5.547: 98.3518% ( 1) 00:11:56.345 5.570 - 5.594: 98.3594% ( 1) 00:11:56.345 5.618 - 5.641: 98.3746% ( 2) 00:11:56.345 5.641 - 5.665: 98.3822% ( 1) 00:11:56.345 5.665 - 5.689: 98.3898% ( 1) 00:11:56.345 5.713 - 5.736: 98.3974% ( 1) 00:11:56.345 5.784 - 5.807: 98.4050% ( 1) 00:11:56.345 5.950 - 5.973: 98.4126% ( 1) 00:11:56.345 6.637 - 6.684: 98.4278% ( 2) 00:11:56.345 6.684 - 6.732: 98.4354% ( 1) 00:11:56.345 6.827 - 6.874: 98.4430% ( 1) 00:11:56.345 6.874 - 6.921: 98.4506% ( 1) 00:11:56.345 6.921 - 6.969: 98.4657% ( 2) 00:11:56.345 7.111 - 7.159: 98.4885% ( 3) 00:11:56.345 7.159 - 7.206: 98.4961% ( 1) 00:11:56.345 7.206 - 7.253: 98.5113% ( 2) 00:11:56.345 7.301 - 7.348: 98.5189% ( 1) 00:11:56.345 7.348 - 7.396: 98.5265% ( 1) 00:11:56.345 7.396 - 7.443: 98.5417% ( 2) 00:11:56.345 7.490 - 7.538: 98.5569% ( 2) 00:11:56.345 7.538 - 7.585: 98.5645% ( 1) 00:11:56.345 7.585 - 7.633: 98.5873% ( 3) 00:11:56.345 7.680 - 7.727: 98.5949% ( 1) 00:11:56.345 7.822 - 7.870: 98.6101% ( 2) 00:11:56.345 7.917 - 7.964: 98.6177% ( 1) 00:11:56.345 8.012 - 8.059: 98.6328% ( 2) 00:11:56.345 8.059 - 8.107: 98.6404% ( 1) 00:11:56.345 8.107 - 8.154: 98.6480% ( 1) 00:11:56.345 8.154 - 8.201: 98.6556% ( 1) 00:11:56.345 8.201 - 8.249: 98.6632% ( 1) 00:11:56.345 8.249 - 8.296: 98.6708% ( 1) 00:11:56.345 8.296 - 8.344: 98.6784% ( 1) 00:11:56.345 8.391 - 8.439: 98.7012% ( 3) 00:11:56.345 8.439 - 8.486: 98.7088% ( 1) 00:11:56.345 8.486 - 8.533: 98.7164% ( 1) 00:11:56.345 8.628 - 8.676: 98.7240% ( 1) 00:11:56.345 8.865 - 8.913: 98.7316% ( 1) 00:11:56.345 8.913 - 8.960: 98.7392% ( 1) 00:11:56.345 8.960 - 9.007: 98.7468% ( 1) 00:11:56.345 9.244 - 9.292: 98.7544% ( 1) 00:11:56.345 9.292 - 9.339: 98.7620% ( 1) 00:11:56.345 9.481 - 9.529: 98.7696% ( 1) 00:11:56.345 9.766 - 9.813: 98.7772% ( 1) 00:11:56.345 9.813 - 9.861: 98.7923% ( 2) 00:11:56.345 10.050 - 10.098: 98.7999% ( 1) 00:11:56.345 10.098 - 10.145: 98.8151% ( 2) 00:11:56.345 10.572 - 10.619: 98.8227% ( 1) 00:11:56.345 10.809 - 10.856: 98.8303% ( 1) 00:11:56.345 10.951 - 10.999: 98.8379% ( 1) 00:11:56.345 11.093 - 11.141: 98.8455% ( 1) 00:11:56.345 11.141 - 11.188: 98.8607% ( 2) 00:11:56.345 11.188 - 11.236: 98.8683% ( 1) 00:11:56.345 11.283 - 11.330: 98.8759% ( 1) 00:11:56.345 11.378 - 11.425: 98.8835% ( 1) 00:11:56.345 11.615 - 11.662: 98.8911% ( 1) 00:11:56.345 11.662 - 11.710: 98.8987% ( 1) 00:11:56.345 11.899 - 11.947: 98.9063% ( 1) 00:11:56.345 11.947 - 11.994: 98.9139% ( 1) 00:11:56.345 12.089 - 12.136: 98.9215% ( 1) 00:11:56.345 12.421 - 12.516: 98.9291% ( 1) 00:11:56.345 12.610 - 12.705: 98.9367% ( 1) 00:11:56.345 12.800 - 12.895: 98.9443% ( 1) 00:11:56.345 13.084 - 13.179: 98.9518% ( 1) 00:11:56.345 13.559 - 13.653: 98.9594% ( 1) 00:11:56.345 13.938 - 14.033: 98.9670% ( 1) 00:11:56.345 14.222 - 14.317: 98.9746% ( 1) 00:11:56.345 14.412 - 14.507: 98.9822% ( 1) 00:11:56.345 14.507 - 14.601: 98.9898% ( 1) 00:11:56.345 14.601 - 14.696: 98.9974% ( 1) 00:11:56.345 14.791 - 14.886: 99.0050% ( 1) 00:11:56.345 15.076 - 15.170: 99.0126% ( 1) 00:11:56.345 15.455 - 15.550: 99.0202% ( 1) 00:11:56.345 17.161 - 17.256: 99.0354% ( 2) 00:11:56.345 17.256 - 17.351: 99.0430% ( 1) 00:11:56.345 17.351 - 17.446: 99.0962% ( 7) 00:11:56.345 17.446 - 17.541: 99.1113% ( 2) 00:11:56.345 17.541 - 17.636: 99.1265% ( 2) 00:11:56.345 17.636 - 17.730: 99.1493% ( 3) 00:11:56.345 17.730 - 17.825: 99.1721% ( 3) 00:11:56.345 17.825 - 17.920: 99.2481% ( 10) 00:11:56.345 17.920 - 18.015: 
99.3088% ( 8) 00:11:56.345 18.015 - 18.110: 99.3392% ( 4) 00:11:56.345 18.110 - 18.204: 99.3620% ( 3) 00:11:56.345 18.204 - 18.299: 99.4455% ( 11) 00:11:56.345 18.299 - 18.394: 99.5063% ( 8) 00:11:56.345 18.394 - 18.489: 99.5595% ( 7) 00:11:56.345 18.489 - 18.584: 99.6278% ( 9) 00:11:56.345 18.584 - 18.679: 99.6506% ( 3) 00:11:56.345 18.679 - 18.773: 99.6962% ( 6) 00:11:56.345 18.773 - 18.868: 99.7418% ( 6) 00:11:56.345 18.963 - 19.058: 99.7721% ( 4) 00:11:56.345 19.058 - 19.153: 99.7949% ( 3) 00:11:56.345 19.153 - 19.247: 99.8177% ( 3) 00:11:56.345 19.437 - 19.532: 99.8253% ( 1) 00:11:56.345 19.532 - 19.627: 99.8329% ( 1) 00:11:56.345 19.721 - 19.816: 99.8405% ( 1) 00:11:56.345 19.816 - 19.911: 99.8481% ( 1) 00:11:56.345 19.911 - 20.006: 99.8557% ( 1) 00:11:56.345 20.196 - 20.290: 99.8633% ( 1) 00:11:56.345 20.480 - 20.575: 99.8709% ( 1) 00:11:56.345 21.523 - 21.618: 99.8785% ( 1) 00:11:56.345 21.902 - 21.997: 99.8937% ( 2) 00:11:56.345 3980.705 - 4004.978: 99.9772% ( 11) 00:11:56.345 4004.978 - 4029.250: 100.0000% ( 3) 00:11:56.345 00:11:56.345 Complete histogram 00:11:56.345 ================== 00:11:56.345 Range in us Cumulative Count 00:11:56.345 2.062 - 2.074: 1.5419% ( 203) 00:11:56.345 2.074 - 2.086: 19.1478% ( 2318) 00:11:56.345 2.086 - 2.098: 22.4442% ( 434) 00:11:56.345 2.098 - 2.110: 35.6145% ( 1734) 00:11:56.345 2.110 - 2.121: 55.3395% ( 2597) 00:11:56.345 2.121 - 2.133: 57.7396% ( 316) 00:11:56.345 2.133 - 2.145: 61.5601% ( 503) 00:11:56.345 2.145 - 2.157: 66.0489% ( 591) 00:11:56.345 2.157 - 2.169: 66.8160% ( 101) 00:11:56.345 2.169 - 2.181: 72.6037% ( 762) 00:11:56.345 2.181 - 2.193: 77.7381% ( 676) 00:11:56.345 2.193 - 2.204: 78.6420% ( 119) 00:11:56.345 2.204 - 2.216: 80.2218% ( 208) 00:11:56.345 2.216 - 2.228: 83.3662% ( 414) 00:11:56.345 2.228 - 2.240: 84.9537% ( 209) 00:11:56.345 2.240 - 2.252: 88.5387% ( 472) 00:11:56.345 2.252 - 2.264: 92.2604% ( 490) 00:11:56.345 2.264 - 2.276: 93.3237% ( 140) 00:11:56.345 2.276 - 2.287: 93.8934% ( 75) 00:11:56.345 2.287 - 2.299: 94.3339% ( 58) 00:11:56.345 2.299 - 2.311: 94.7896% ( 60) 00:11:56.345 2.311 - 2.323: 95.1618% ( 49) 00:11:56.345 2.323 - 2.335: 95.2909% ( 17) 00:11:56.345 2.335 - 2.347: 95.5188% ( 30) 00:11:56.345 2.347 - 2.359: 95.6555% ( 18) 00:11:56.345 2.359 - 2.370: 95.8074% ( 20) 00:11:56.345 2.370 - 2.382: 95.9593% ( 20) 00:11:56.345 2.382 - 2.394: 96.2023% ( 32) 00:11:56.345 2.394 - 2.406: 96.4530% ( 33) 00:11:56.345 2.406 - 2.418: 96.7720% ( 42) 00:11:56.345 2.418 - 2.430: 97.1290% ( 47) 00:11:56.345 2.430 - 2.441: 97.3796% ( 33) 00:11:56.345 2.441 - 2.453: 97.5543% ( 23) 00:11:56.345 2.453 - 2.465: 97.6606% ( 14) 00:11:56.345 2.465 - 2.477: 97.7746% ( 15) 00:11:56.345 2.477 - 2.489: 97.8961% ( 16) 00:11:56.345 2.489 - 2.501: 98.0404% ( 19) 00:11:56.345 2.501 - 2.513: 98.1543% ( 15) 00:11:56.345 2.513 - 2.524: 98.2075% ( 7) 00:11:56.345 2.524 - 2.536: 98.2455% ( 5) 00:11:56.345 2.536 - 2.548: 98.2759% ( 4) 00:11:56.345 2.548 - 2.560: 98.3214% ( 6) 00:11:56.345 2.572 - 2.584: 98.3594% ( 5) 00:11:56.345 2.584 - 2.596: 98.3670% ( 1) 00:11:56.345 2.596 - 2.607: 98.3746% ( 1) 00:11:56.345 2.619 - 2.631: 98.3822% ( 1) 00:11:56.345 2.631 - 2.643: 98.3898% ( 1) 00:11:56.345 2.643 - 2.655: 98.3974% ( 1) 00:11:56.345 2.655 - 2.667: 98.4050% ( 1) 00:11:56.345 2.667 - 2.679: 98.4202% ( 2) 00:11:56.345 2.679 - 2.690: 98.4278% ( 1) 00:11:56.345 2.690 - 2.702: 98.4354% ( 1) 00:11:56.345 2.726 - 2.738: 98.4430% ( 1) 00:11:56.345 2.761 - 2.773: 98.4506% ( 1) 00:11:56.345 2.809 - 2.821: 98.4657% ( 2) 00:11:56.345 2.844 - 2.856: 
98.4733% ( 1) 00:11:56.345 2.868 - 2.880: 98.4809% ( 1) 00:11:56.345 2.892 - 2.904: 98.4885% ( 1) 00:11:56.345 2.927 - 2.939: 98.4961% ( 1) 00:11:56.345 2.939 - 2.951: 98.5037% ( 1) 00:11:56.345 2.951 - 2.963: 98.5113% ( 1) 00:11:56.345 3.022 - 3.034: 98.5189% ( 1) 00:11:56.345 3.390 - 3.413: 98.5265% ( 1) 00:11:56.345 3.413 - 3.437: 98.5341% ( 1) 00:11:56.345 3.437 - 3.461: 98.5417% ( 1) 00:11:56.345 3.484 - 3.508: 98.5493% ( 1) 00:11:56.345 3.508 - 3.532: 98.5569% ( 1) 00:11:56.345 3.532 - 3.556: 98.5645% ( 1) 00:11:56.345 3.556 - 3.579: 9[2024-07-12 15:48:53.546484] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:56.345 8.5721% ( 1) 00:11:56.345 3.579 - 3.603: 98.5949% ( 3) 00:11:56.345 3.627 - 3.650: 98.6025% ( 1) 00:11:56.345 3.650 - 3.674: 98.6101% ( 1) 00:11:56.345 3.721 - 3.745: 98.6177% ( 1) 00:11:56.345 3.793 - 3.816: 98.6252% ( 1) 00:11:56.345 3.816 - 3.840: 98.6404% ( 2) 00:11:56.345 3.935 - 3.959: 98.6556% ( 2) 00:11:56.345 4.006 - 4.030: 98.6632% ( 1) 00:11:56.345 4.243 - 4.267: 98.6708% ( 1) 00:11:56.346 5.357 - 5.381: 98.6784% ( 1) 00:11:56.346 5.452 - 5.476: 98.6860% ( 1) 00:11:56.346 5.476 - 5.499: 98.6936% ( 1) 00:11:56.346 5.499 - 5.523: 98.7088% ( 2) 00:11:56.346 5.547 - 5.570: 98.7164% ( 1) 00:11:56.346 5.807 - 5.831: 98.7240% ( 1) 00:11:56.346 5.879 - 5.902: 98.7316% ( 1) 00:11:56.346 5.902 - 5.926: 98.7468% ( 2) 00:11:56.346 5.997 - 6.021: 98.7544% ( 1) 00:11:56.346 6.068 - 6.116: 98.7620% ( 1) 00:11:56.346 6.210 - 6.258: 98.7696% ( 1) 00:11:56.346 6.258 - 6.305: 98.7772% ( 1) 00:11:56.346 6.447 - 6.495: 98.7923% ( 2) 00:11:56.346 6.684 - 6.732: 98.7999% ( 1) 00:11:56.346 6.732 - 6.779: 98.8151% ( 2) 00:11:56.346 6.779 - 6.827: 98.8227% ( 1) 00:11:56.346 6.827 - 6.874: 98.8303% ( 1) 00:11:56.346 7.111 - 7.159: 98.8379% ( 1) 00:11:56.346 7.206 - 7.253: 98.8455% ( 1) 00:11:56.346 7.253 - 7.301: 98.8531% ( 1) 00:11:56.346 7.633 - 7.680: 98.8607% ( 1) 00:11:56.346 15.550 - 15.644: 98.8759% ( 2) 00:11:56.346 15.644 - 15.739: 98.8835% ( 1) 00:11:56.346 15.834 - 15.929: 98.9063% ( 3) 00:11:56.346 15.929 - 16.024: 98.9367% ( 4) 00:11:56.346 16.024 - 16.119: 98.9594% ( 3) 00:11:56.346 16.119 - 16.213: 98.9822% ( 3) 00:11:56.346 16.213 - 16.308: 99.0050% ( 3) 00:11:56.346 16.308 - 16.403: 99.0202% ( 2) 00:11:56.346 16.403 - 16.498: 99.0582% ( 5) 00:11:56.346 16.498 - 16.593: 99.0810% ( 3) 00:11:56.346 16.593 - 16.687: 99.1341% ( 7) 00:11:56.346 16.687 - 16.782: 99.1569% ( 3) 00:11:56.346 16.782 - 16.877: 99.1797% ( 3) 00:11:56.346 16.877 - 16.972: 99.2177% ( 5) 00:11:56.346 16.972 - 17.067: 99.2405% ( 3) 00:11:56.346 17.067 - 17.161: 99.2481% ( 1) 00:11:56.346 17.161 - 17.256: 99.2633% ( 2) 00:11:56.346 17.256 - 17.351: 99.2936% ( 4) 00:11:56.346 17.446 - 17.541: 99.3088% ( 2) 00:11:56.346 17.730 - 17.825: 99.3164% ( 1) 00:11:56.346 17.825 - 17.920: 99.3240% ( 1) 00:11:56.346 18.015 - 18.110: 99.3392% ( 2) 00:11:56.346 18.110 - 18.204: 99.3544% ( 2) 00:11:56.346 18.963 - 19.058: 99.3620% ( 1) 00:11:56.346 21.713 - 21.807: 99.3696% ( 1) 00:11:56.346 3980.705 - 4004.978: 99.8633% ( 65) 00:11:56.346 4004.978 - 4029.250: 99.9924% ( 17) 00:11:56.346 7039.052 - 7087.597: 100.0000% ( 1) 00:11:56.346 00:11:56.346 15:48:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:11:56.346 15:48:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:11:56.346 15:48:53 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:11:56.346 15:48:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:11:56.346 15:48:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:56.603 [ 00:11:56.603 { 00:11:56.603 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:56.603 "subtype": "Discovery", 00:11:56.603 "listen_addresses": [], 00:11:56.603 "allow_any_host": true, 00:11:56.603 "hosts": [] 00:11:56.603 }, 00:11:56.603 { 00:11:56.603 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:56.603 "subtype": "NVMe", 00:11:56.603 "listen_addresses": [ 00:11:56.603 { 00:11:56.603 "trtype": "VFIOUSER", 00:11:56.603 "adrfam": "IPv4", 00:11:56.603 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:56.603 "trsvcid": "0" 00:11:56.603 } 00:11:56.603 ], 00:11:56.603 "allow_any_host": true, 00:11:56.603 "hosts": [], 00:11:56.603 "serial_number": "SPDK1", 00:11:56.603 "model_number": "SPDK bdev Controller", 00:11:56.603 "max_namespaces": 32, 00:11:56.603 "min_cntlid": 1, 00:11:56.603 "max_cntlid": 65519, 00:11:56.603 "namespaces": [ 00:11:56.603 { 00:11:56.603 "nsid": 1, 00:11:56.603 "bdev_name": "Malloc1", 00:11:56.603 "name": "Malloc1", 00:11:56.603 "nguid": "A6318B975A2D4ED487BAE0191DA7ACF5", 00:11:56.603 "uuid": "a6318b97-5a2d-4ed4-87ba-e0191da7acf5" 00:11:56.603 }, 00:11:56.603 { 00:11:56.603 "nsid": 2, 00:11:56.603 "bdev_name": "Malloc3", 00:11:56.603 "name": "Malloc3", 00:11:56.603 "nguid": "821EBFE27D464969B52FF245862467DA", 00:11:56.603 "uuid": "821ebfe2-7d46-4969-b52f-f245862467da" 00:11:56.603 } 00:11:56.603 ] 00:11:56.603 }, 00:11:56.603 { 00:11:56.603 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:56.603 "subtype": "NVMe", 00:11:56.603 "listen_addresses": [ 00:11:56.603 { 00:11:56.603 "trtype": "VFIOUSER", 00:11:56.603 "adrfam": "IPv4", 00:11:56.603 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:56.603 "trsvcid": "0" 00:11:56.603 } 00:11:56.603 ], 00:11:56.603 "allow_any_host": true, 00:11:56.603 "hosts": [], 00:11:56.603 "serial_number": "SPDK2", 00:11:56.603 "model_number": "SPDK bdev Controller", 00:11:56.603 "max_namespaces": 32, 00:11:56.603 "min_cntlid": 1, 00:11:56.603 "max_cntlid": 65519, 00:11:56.603 "namespaces": [ 00:11:56.603 { 00:11:56.603 "nsid": 1, 00:11:56.603 "bdev_name": "Malloc2", 00:11:56.603 "name": "Malloc2", 00:11:56.603 "nguid": "CDBD641E901B472C9FEE2E7682E891A8", 00:11:56.603 "uuid": "cdbd641e-901b-472c-9fee-2e7682e891a8" 00:11:56.603 } 00:11:56.603 ] 00:11:56.603 } 00:11:56.603 ] 00:11:56.603 15:48:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:11:56.603 15:48:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=707363 00:11:56.603 15:48:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:11:56.603 15:48:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:11:56.603 15:48:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:11:56.603 15:48:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:11:56.603 15:48:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:11:56.603 15:48:53 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:11:56.603 15:48:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:11:56.603 15:48:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:11:56.862 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.862 [2024-07-12 15:48:54.017326] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:11:56.862 Malloc4 00:11:56.862 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:11:57.120 [2024-07-12 15:48:54.347905] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:11:57.120 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:11:57.120 Asynchronous Event Request test 00:11:57.120 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:11:57.120 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:11:57.120 Registering asynchronous event callbacks... 00:11:57.120 Starting namespace attribute notice tests for all controllers... 00:11:57.120 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:11:57.120 aer_cb - Changed Namespace 00:11:57.120 Cleaning up... 
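The Changed Namespace notice above is what the aer test is waiting for: the tool registers its async-event callbacks against cnode2 and signals readiness through the touch file, after which the script hot-adds a second namespace to the subsystem. Condensed from the commands visible in the trace, the sequence looks roughly like the sketch below; SPDK_DIR is only a shorthand for the checkout path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk used throughout, and the backgrounding and wait-for-file plumbing of the real script is glossed over.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed shorthand for the checkout path in the trace

# Start the AER listener against the vfio-user controller; it touches the file once its callbacks are registered
$SPDK_DIR/test/nvme/aer/aer \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
  -n 2 -g -t /tmp/aer_touch_file &

# Once the touch file appears, hot-add a namespace; this is what fires the Changed Namespace AEN logged above
$SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
$SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

The subsystem listing that follows shows the result, with Malloc4 attached to cnode2 as namespace 2.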
00:11:57.377 [ 00:11:57.377 { 00:11:57.377 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:57.377 "subtype": "Discovery", 00:11:57.377 "listen_addresses": [], 00:11:57.377 "allow_any_host": true, 00:11:57.377 "hosts": [] 00:11:57.377 }, 00:11:57.377 { 00:11:57.377 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:11:57.377 "subtype": "NVMe", 00:11:57.377 "listen_addresses": [ 00:11:57.377 { 00:11:57.377 "trtype": "VFIOUSER", 00:11:57.377 "adrfam": "IPv4", 00:11:57.377 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:11:57.377 "trsvcid": "0" 00:11:57.377 } 00:11:57.377 ], 00:11:57.377 "allow_any_host": true, 00:11:57.377 "hosts": [], 00:11:57.377 "serial_number": "SPDK1", 00:11:57.377 "model_number": "SPDK bdev Controller", 00:11:57.377 "max_namespaces": 32, 00:11:57.377 "min_cntlid": 1, 00:11:57.377 "max_cntlid": 65519, 00:11:57.377 "namespaces": [ 00:11:57.377 { 00:11:57.377 "nsid": 1, 00:11:57.377 "bdev_name": "Malloc1", 00:11:57.377 "name": "Malloc1", 00:11:57.377 "nguid": "A6318B975A2D4ED487BAE0191DA7ACF5", 00:11:57.377 "uuid": "a6318b97-5a2d-4ed4-87ba-e0191da7acf5" 00:11:57.377 }, 00:11:57.377 { 00:11:57.377 "nsid": 2, 00:11:57.377 "bdev_name": "Malloc3", 00:11:57.377 "name": "Malloc3", 00:11:57.377 "nguid": "821EBFE27D464969B52FF245862467DA", 00:11:57.377 "uuid": "821ebfe2-7d46-4969-b52f-f245862467da" 00:11:57.377 } 00:11:57.377 ] 00:11:57.377 }, 00:11:57.377 { 00:11:57.377 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:11:57.377 "subtype": "NVMe", 00:11:57.377 "listen_addresses": [ 00:11:57.377 { 00:11:57.377 "trtype": "VFIOUSER", 00:11:57.377 "adrfam": "IPv4", 00:11:57.378 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:11:57.378 "trsvcid": "0" 00:11:57.378 } 00:11:57.378 ], 00:11:57.378 "allow_any_host": true, 00:11:57.378 "hosts": [], 00:11:57.378 "serial_number": "SPDK2", 00:11:57.378 "model_number": "SPDK bdev Controller", 00:11:57.378 "max_namespaces": 32, 00:11:57.378 "min_cntlid": 1, 00:11:57.378 "max_cntlid": 65519, 00:11:57.378 "namespaces": [ 00:11:57.378 { 00:11:57.378 "nsid": 1, 00:11:57.378 "bdev_name": "Malloc2", 00:11:57.378 "name": "Malloc2", 00:11:57.378 "nguid": "CDBD641E901B472C9FEE2E7682E891A8", 00:11:57.378 "uuid": "cdbd641e-901b-472c-9fee-2e7682e891a8" 00:11:57.378 }, 00:11:57.378 { 00:11:57.378 "nsid": 2, 00:11:57.378 "bdev_name": "Malloc4", 00:11:57.378 "name": "Malloc4", 00:11:57.378 "nguid": "BA859D0662224203B5A5A0E0366A0C8A", 00:11:57.378 "uuid": "ba859d06-6222-4203-b5a5-a0e0366a0c8a" 00:11:57.378 } 00:11:57.378 ] 00:11:57.378 } 00:11:57.378 ] 00:11:57.378 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 707363 00:11:57.378 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:11:57.378 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 701248 00:11:57.378 15:48:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 701248 ']' 00:11:57.378 15:48:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 701248 00:11:57.378 15:48:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:11:57.378 15:48:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:57.378 15:48:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 701248 00:11:57.378 15:48:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:57.378 15:48:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:11:57.378 15:48:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 701248' 00:11:57.378 killing process with pid 701248 00:11:57.378 15:48:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 701248 00:11:57.378 15:48:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 701248 00:11:57.951 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:11:57.951 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:57.951 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:11:57.951 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:11:57.951 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:11:57.951 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=707504 00:11:57.951 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:11:57.951 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 707504' 00:11:57.951 Process pid: 707504 00:11:57.951 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:57.951 15:48:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 707504 00:11:57.951 15:48:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 707504 ']' 00:11:57.951 15:48:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.951 15:48:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.951 15:48:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.951 15:48:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.951 15:48:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:57.951 [2024-07-12 15:48:55.040147] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:11:57.951 [2024-07-12 15:48:55.041157] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:11:57.951 [2024-07-12 15:48:55.041231] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.951 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.951 [2024-07-12 15:48:55.101455] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.951 [2024-07-12 15:48:55.213930] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.951 [2024-07-12 15:48:55.213994] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:57.951 [2024-07-12 15:48:55.214008] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.951 [2024-07-12 15:48:55.214020] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.951 [2024-07-12 15:48:55.214030] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.951 [2024-07-12 15:48:55.214095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.951 [2024-07-12 15:48:55.214121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.951 [2024-07-12 15:48:55.214178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.951 [2024-07-12 15:48:55.214181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.209 [2024-07-12 15:48:55.319216] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:11:58.209 [2024-07-12 15:48:55.319443] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:11:58.209 [2024-07-12 15:48:55.319726] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:11:58.209 [2024-07-12 15:48:55.320381] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:11:58.209 [2024-07-12 15:48:55.320609] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:11:58.209 15:48:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:58.209 15:48:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:11:58.209 15:48:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:59.142 15:48:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:11:59.400 15:48:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:59.400 15:48:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:59.400 15:48:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:59.400 15:48:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:59.400 15:48:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:59.659 Malloc1 00:11:59.659 15:48:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:59.918 15:48:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:00.175 15:48:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:00.433 15:48:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:12:00.433 15:48:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:00.433 15:48:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:00.690 Malloc2 00:12:00.690 15:48:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:00.948 15:48:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:01.206 15:48:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:01.464 15:48:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:01.464 15:48:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 707504 00:12:01.464 15:48:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 707504 ']' 00:12:01.464 15:48:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 707504 00:12:01.464 15:48:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:01.464 15:48:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:01.464 15:48:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 707504 00:12:01.464 15:48:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:01.464 15:48:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:01.464 15:48:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 707504' 00:12:01.464 killing process with pid 707504 00:12:01.464 15:48:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 707504 00:12:01.464 15:48:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 707504 00:12:01.721 15:48:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:01.721 15:48:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:01.721 00:12:01.721 real 0m52.515s 00:12:01.721 user 3m27.221s 00:12:01.721 sys 0m4.384s 00:12:01.721 15:48:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:01.721 15:48:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:01.721 ************************************ 00:12:01.721 END TEST nvmf_vfio_user 00:12:01.721 ************************************ 00:12:01.721 15:48:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:01.721 15:48:58 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:01.721 15:48:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:01.721 15:48:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.721 15:48:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:01.721 ************************************ 00:12:01.721 START TEST 
nvmf_vfio_user_nvme_compliance 00:12:01.721 ************************************ 00:12:01.721 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:01.979 * Looking for test storage... 00:12:01.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=707985 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 707985' 00:12:01.980 Process pid: 707985 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 707985 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 707985 ']' 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:01.980 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:01.980 [2024-07-12 15:48:59.118346] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:12:01.980 [2024-07-12 15:48:59.118426] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.980 EAL: No free 2048 kB hugepages reported on node 1 00:12:01.980 [2024-07-12 15:48:59.176713] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:02.238 [2024-07-12 15:48:59.281019] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.238 [2024-07-12 15:48:59.281082] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.238 [2024-07-12 15:48:59.281105] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.238 [2024-07-12 15:48:59.281116] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.238 [2024-07-12 15:48:59.281125] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
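The trace above shows compliance.sh bringing up its own target before any test runs: nvmf_tgt is started with shm id 0, the full 0xFFFF tracepoint group mask, and a three-core mask (-m 0x7, matching the three reactor notices that follow), after which the script waits for the RPC socket. A minimal sketch of that launch sequence, run from the spdk checkout; the rpc_get_methods loop is only a stand-in readiness probe, not the actual waitforlisten helper:

build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &      # shm id 0, all tracepoint groups, cores 0-2
nvmfpid=$!
trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT
# block until the target answers on its default RPC socket
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done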
00:12:02.238 [2024-07-12 15:48:59.281208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.238 [2024-07-12 15:48:59.281276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.238 [2024-07-12 15:48:59.281279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.238 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:02.238 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:02.238 15:48:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:03.171 malloc0 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:03.171 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.172 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:03.172 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.172 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:03.172 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.172 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:03.429 15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.429 
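The provisioning traced above goes through rpc_cmd, the autotest helper that talks to the target's JSON-RPC socket; the same setup can be sketched as direct scripts/rpc.py calls, with every argument taken verbatim from the trace:

scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user                                    # socket directory for the vfio-user endpoint
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0            # 64 MB malloc bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The listener address is the directory created above, and the compliance binary that runs next attaches to it as a vfio-user device rather than over an IP transport.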
15:49:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:03.429 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.429 00:12:03.429 00:12:03.429 CUnit - A unit testing framework for C - Version 2.1-3 00:12:03.429 http://cunit.sourceforge.net/ 00:12:03.429 00:12:03.429 00:12:03.429 Suite: nvme_compliance 00:12:03.429 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 15:49:00.628251] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:03.429 [2024-07-12 15:49:00.629653] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:03.429 [2024-07-12 15:49:00.629677] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:03.429 [2024-07-12 15:49:00.629690] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:03.429 [2024-07-12 15:49:00.631264] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:03.429 passed 00:12:03.429 Test: admin_identify_ctrlr_verify_fused ...[2024-07-12 15:49:00.715829] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:03.429 [2024-07-12 15:49:00.718846] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:03.686 passed 00:12:03.686 Test: admin_identify_ns ...[2024-07-12 15:49:00.807282] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:03.686 [2024-07-12 15:49:00.866770] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:03.686 [2024-07-12 15:49:00.873783] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:03.686 [2024-07-12 15:49:00.895883] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:03.686 passed 00:12:03.686 Test: admin_get_features_mandatory_features ...[2024-07-12 15:49:00.979608] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:03.944 [2024-07-12 15:49:00.982628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:03.944 passed 00:12:03.944 Test: admin_get_features_optional_features ...[2024-07-12 15:49:01.067196] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:03.944 [2024-07-12 15:49:01.070215] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:03.944 passed 00:12:03.944 Test: admin_set_features_number_of_queues ...[2024-07-12 15:49:01.150246] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.202 [2024-07-12 15:49:01.258841] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.202 passed 00:12:04.202 Test: admin_get_log_page_mandatory_logs ...[2024-07-12 15:49:01.342480] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.202 [2024-07-12 15:49:01.345510] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.202 passed 00:12:04.202 Test: admin_get_log_page_with_lpo ...[2024-07-12 15:49:01.426485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.202 [2024-07-12 15:49:01.494771] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:04.460 [2024-07-12 15:49:01.507851] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.460 passed 00:12:04.460 Test: fabric_property_get ...[2024-07-12 15:49:01.591566] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.460 [2024-07-12 15:49:01.592897] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:04.460 [2024-07-12 15:49:01.594587] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.460 passed 00:12:04.460 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 15:49:01.679115] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.460 [2024-07-12 15:49:01.680382] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:04.460 [2024-07-12 15:49:01.682140] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.460 passed 00:12:04.718 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 15:49:01.767252] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.718 [2024-07-12 15:49:01.850747] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:04.718 [2024-07-12 15:49:01.866762] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:04.718 [2024-07-12 15:49:01.871856] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.718 passed 00:12:04.718 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 15:49:01.957206] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.718 [2024-07-12 15:49:01.958478] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:04.718 [2024-07-12 15:49:01.960232] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.718 passed 00:12:04.976 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 15:49:02.044200] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.976 [2024-07-12 15:49:02.119746] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:04.976 [2024-07-12 15:49:02.143763] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:04.976 [2024-07-12 15:49:02.148870] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.976 passed 00:12:04.976 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 15:49:02.232578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:04.976 [2024-07-12 15:49:02.233892] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:04.976 [2024-07-12 15:49:02.233933] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:04.976 [2024-07-12 15:49:02.235598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:04.976 passed 00:12:05.233 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 15:49:02.320744] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.233 [2024-07-12 15:49:02.414746] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:12:05.233 [2024-07-12 15:49:02.422763] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:05.233 [2024-07-12 15:49:02.430758] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:05.233 [2024-07-12 15:49:02.438779] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:05.233 [2024-07-12 15:49:02.467863] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.233 passed 00:12:05.490 Test: admin_create_io_sq_verify_pc ...[2024-07-12 15:49:02.552621] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:05.490 [2024-07-12 15:49:02.568769] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:05.490 [2024-07-12 15:49:02.586018] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:05.490 passed 00:12:05.490 Test: admin_create_io_qp_max_qps ...[2024-07-12 15:49:02.667582] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:06.861 [2024-07-12 15:49:03.764754] nvme_ctrlr.c:5475:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:12:06.861 [2024-07-12 15:49:04.129789] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:07.118 passed 00:12:07.118 Test: admin_create_io_sq_shared_cq ...[2024-07-12 15:49:04.217098] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:07.118 [2024-07-12 15:49:04.348745] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:07.118 [2024-07-12 15:49:04.385818] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:07.377 passed 00:12:07.377 00:12:07.377 Run Summary: Type Total Ran Passed Failed Inactive 00:12:07.377 suites 1 1 n/a 0 0 00:12:07.377 tests 18 18 18 0 0 00:12:07.377 asserts 360 360 360 0 n/a 00:12:07.377 00:12:07.377 Elapsed time = 1.555 seconds 00:12:07.377 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 707985 00:12:07.377 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 707985 ']' 00:12:07.377 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 707985 00:12:07.377 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:12:07.377 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:07.377 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 707985 00:12:07.377 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:07.377 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:07.377 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 707985' 00:12:07.377 killing process with pid 707985 00:12:07.377 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 707985 00:12:07.377 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 707985 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:07.635 00:12:07.635 real 0m5.727s 00:12:07.635 user 0m16.074s 00:12:07.635 sys 0m0.558s 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:07.635 ************************************ 00:12:07.635 END TEST nvmf_vfio_user_nvme_compliance 00:12:07.635 ************************************ 00:12:07.635 15:49:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:07.635 15:49:04 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:07.635 15:49:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:07.635 15:49:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.635 15:49:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:07.635 ************************************ 00:12:07.635 START TEST nvmf_vfio_user_fuzz 00:12:07.635 ************************************ 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:07.635 * Looking for test storage... 00:12:07.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.635 15:49:04 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:07.635 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:07.636 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:07.636 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:07.636 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:07.636 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=708709 00:12:07.636 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:07.636 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 708709' 00:12:07.636 Process pid: 708709 00:12:07.636 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:07.636 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 708709 00:12:07.636 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 708709 ']' 00:12:07.636 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.636 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:07.636 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:07.636 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:07.636 15:49:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:08.199 15:49:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.199 15:49:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:12:08.199 15:49:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:09.129 malloc0 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:09.129 15:49:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:12:41.187 Fuzzing completed. 
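The fuzz pass that just completed was driven by nvme_fuzz against the vfio-user endpoint the script provisioned; restated as a standalone invocation from the spdk checkout, with the flags exactly as the trace shows them (core mask 0x2, -t 30 in line with the roughly 30 seconds of elapsed timestamps, and the transport ID string naming the socket directory and subsystem NQN; the remaining options are passed through without restating their semantics here):

test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a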
Shutting down the fuzz application 00:12:41.187 00:12:41.187 Dumping successful admin opcodes: 00:12:41.187 8, 9, 10, 24, 00:12:41.187 Dumping successful io opcodes: 00:12:41.187 0, 00:12:41.187 NS: 0x200003a1ef00 I/O qp, Total commands completed: 651015, total successful commands: 2526, random_seed: 3352399360 00:12:41.187 NS: 0x200003a1ef00 admin qp, Total commands completed: 145314, total successful commands: 1179, random_seed: 2279761984 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 708709 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 708709 ']' 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 708709 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 708709 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 708709' 00:12:41.187 killing process with pid 708709 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 708709 00:12:41.187 15:49:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 708709 00:12:41.187 15:49:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:12:41.187 15:49:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:12:41.187 00:12:41.187 real 0m32.287s 00:12:41.187 user 0m29.652s 00:12:41.187 sys 0m29.755s 00:12:41.187 15:49:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:41.187 15:49:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:41.187 ************************************ 00:12:41.187 END TEST nvmf_vfio_user_fuzz 00:12:41.187 ************************************ 00:12:41.187 15:49:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:41.187 15:49:37 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:41.187 15:49:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:41.187 15:49:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.187 15:49:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:41.187 ************************************ 00:12:41.187 START 
TEST nvmf_host_management 00:12:41.187 ************************************ 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:12:41.187 * Looking for test storage... 00:12:41.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:41.187 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.188 15:49:37 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:41.188 15:49:37 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:12:41.188 15:49:37 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.122 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:42.123 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:42.123 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:42.123 Found net devices under 0000:84:00.0: cvl_0_0 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:42.123 Found net devices under 0000:84:00.1: cvl_0_1 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:42.123 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:42.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:12:42.382 00:12:42.382 --- 10.0.0.2 ping statistics --- 00:12:42.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.382 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:42.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:12:42.382 00:12:42.382 --- 10.0.0.1 ping statistics --- 00:12:42.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.382 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=714179 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 714179 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 714179 ']' 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:42.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.382 15:49:39 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:42.382 [2024-07-12 15:49:39.510964] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:12:42.382 [2024-07-12 15:49:39.511064] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.382 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.382 [2024-07-12 15:49:39.582237] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.640 [2024-07-12 15:49:39.702158] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.640 [2024-07-12 15:49:39.702214] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.640 [2024-07-12 15:49:39.702228] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.640 [2024-07-12 15:49:39.702239] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.640 [2024-07-12 15:49:39.702248] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.640 [2024-07-12 15:49:39.702341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.640 [2024-07-12 15:49:39.702367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.640 [2024-07-12 15:49:39.702429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:42.640 [2024-07-12 15:49:39.702432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.207 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.207 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:43.207 15:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:43.207 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:43.207 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 15:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.207 15:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:43.207 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.207 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:43.207 [2024-07-12 15:49:40.488530] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.207 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.207 15:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:43.207 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:43.207 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:43.465 15:49:40 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:43.465 15:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:43.465 15:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:43.465 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.465 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:43.465 Malloc0 00:12:43.465 [2024-07-12 15:49:40.549749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.465 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.465 15:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:43.465 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:43.465 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=714349 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 714349 /var/tmp/bdevperf.sock 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 714349 ']' 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:43.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
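The target bring-up traced above (loopback up inside cvl_0_0_ns_spdk, the iptables ACCEPT on port 4420, nvmf_tgt started in the namespace, then the batched RPCs fed from rpcs.txt) is equivalent to a short manual RPC sequence. A minimal sketch, assuming a 64 MiB / 512 B Malloc0 backing bdev and a placeholder serial number, since the generated rpcs.txt itself is not printed in the trace:

    # run against the nvmf_tgt RPC socket (/var/tmp/spdk.sock) inside the namespace
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The listener address and port match the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above; the NQNs match the host/subsystem pair used by the bdevperf configuration printed below.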
00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:43.466 { 00:12:43.466 "params": { 00:12:43.466 "name": "Nvme$subsystem", 00:12:43.466 "trtype": "$TEST_TRANSPORT", 00:12:43.466 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:43.466 "adrfam": "ipv4", 00:12:43.466 "trsvcid": "$NVMF_PORT", 00:12:43.466 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:43.466 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:43.466 "hdgst": ${hdgst:-false}, 00:12:43.466 "ddgst": ${ddgst:-false} 00:12:43.466 }, 00:12:43.466 "method": "bdev_nvme_attach_controller" 00:12:43.466 } 00:12:43.466 EOF 00:12:43.466 )") 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:43.466 15:49:40 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:43.466 "params": { 00:12:43.466 "name": "Nvme0", 00:12:43.466 "trtype": "tcp", 00:12:43.466 "traddr": "10.0.0.2", 00:12:43.466 "adrfam": "ipv4", 00:12:43.466 "trsvcid": "4420", 00:12:43.466 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:43.466 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:43.466 "hdgst": false, 00:12:43.466 "ddgst": false 00:12:43.466 }, 00:12:43.466 "method": "bdev_nvme_attach_controller" 00:12:43.466 }' 00:12:43.466 [2024-07-12 15:49:40.630798] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:12:43.466 [2024-07-12 15:49:40.630876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid714349 ] 00:12:43.466 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.466 [2024-07-12 15:49:40.696276] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.724 [2024-07-12 15:49:40.808171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.982 Running I/O for 10 seconds... 
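Before the subsystem is disturbed, the trace that follows waits for the verify job to actually issue reads by polling bdevperf's own RPC socket. A condensed sketch of that wait loop, using the same names and thresholds that appear in the trace:

    ret=1
    for (( i = 10; i != 0; i-- )); do
        read_io_count=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done

In the run below the first sample reads 67 ops, one 0.25 s sleep later it reads 515, and the loop breaks with ret=0.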
00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:12:43.982 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.276 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:44.276 [2024-07-12 15:49:41.500896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501011] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501033] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501058] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) 
to be set 00:12:44.276 [2024-07-12 15:49:41.501234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501311] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501419] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501432] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501468] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501505] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501517] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501579] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501640] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.276 [2024-07-12 15:49:41.501653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.277 [2024-07-12 15:49:41.501665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.277 [2024-07-12 15:49:41.501677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.277 [2024-07-12 15:49:41.501689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.277 [2024-07-12 15:49:41.501701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.277 [2024-07-12 15:49:41.501713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.277 [2024-07-12 15:49:41.501725] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.277 [2024-07-12 15:49:41.501746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.277 [2024-07-12 15:49:41.501761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.277 [2024-07-12 15:49:41.501774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.277 [2024-07-12 15:49:41.501786] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.277 [2024-07-12 15:49:41.501798] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bf6d0 is same with the state(5) to be set 00:12:44.277 [2024-07-12 15:49:41.501951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.501989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.502973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.502988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.503002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.503018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.503032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.503055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.503069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.503084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.277 [2024-07-12 15:49:41.503098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.277 [2024-07-12 15:49:41.503116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:12:44.278 [2024-07-12 15:49:41.503523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 
15:49:41.503835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:12:44.278 [2024-07-12 15:49:41.503957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.503972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2089200 is same with the state(5) to be set 00:12:44.278 [2024-07-12 15:49:41.504053] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2089200 was disconnected and freed. reset controller. 
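The burst of tqpair state errors and ABORTED - SQ DELETION completions above is the fallout of the nvmf_subsystem_remove_host call issued while bdevperf still had the verify job's reads in flight: the target tears down the TCP qpair, every outstanding command completes as aborted, and the NVMe bdev layer frees the qpair and schedules a controller reset. The recovery traced next is simply to re-authorize the host so the reset can reconnect; in RPC terms the pair of calls is:

    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0   # drops the live connection
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0      # lets the reset reconnect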
00:12:44.278 [2024-07-12 15:49:41.504127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.278 [2024-07-12 15:49:41.504149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.504165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.278 [2024-07-12 15:49:41.504179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.504201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.278 [2024-07-12 15:49:41.504215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.504229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:12:44.278 [2024-07-12 15:49:41.504243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:44.278 [2024-07-12 15:49:41.504255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c78080 is same with the state(5) to be set 00:12:44.278 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.278 [2024-07-12 15:49:41.505407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:44.278 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:44.278 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.278 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:44.278 task offset: 73728 on job bdev=Nvme0n1 fails 00:12:44.278 00:12:44.278 Latency(us) 00:12:44.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:44.278 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:44.278 Job: Nvme0n1 ended in about 0.40 seconds with error 00:12:44.278 Verification LBA range: start 0x0 length 0x400 00:12:44.278 Nvme0n1 : 0.40 1441.11 90.07 160.12 0.00 38836.45 7670.14 34952.53 00:12:44.278 =================================================================================================================== 00:12:44.278 Total : 1441.11 90.07 160.12 0.00 38836.45 7670.14 34952.53 00:12:44.278 [2024-07-12 15:49:41.507561] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:44.278 [2024-07-12 15:49:41.507605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c78080 (9): Bad file descriptor 00:12:44.278 15:49:41 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.278 15:49:41 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:44.278 [2024-07-12 15:49:41.515527] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
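With the controller reset reported successful, the script moves on to cleanup and re-verification. The first bdevperf stops itself when its job fails (spdk_app_stop on non-zero), so the kill in the next trace line may find the process already gone; that is tolerated, and a short one-second verify run is launched against the same subsystem to confirm I/O flows again. A sketch of that step, with arguments as printed below:

    kill -9 "$perfpid" || true    # perfpid was 714349 here; bdevperf may already have exited
    build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1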
00:12:45.232 15:49:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 714349 00:12:45.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (714349) - No such process 00:12:45.232 15:49:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:12:45.232 15:49:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:45.232 15:49:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:45.232 15:49:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:45.232 15:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:45.232 15:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:45.232 15:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:45.232 15:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:45.232 { 00:12:45.232 "params": { 00:12:45.232 "name": "Nvme$subsystem", 00:12:45.232 "trtype": "$TEST_TRANSPORT", 00:12:45.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:45.232 "adrfam": "ipv4", 00:12:45.232 "trsvcid": "$NVMF_PORT", 00:12:45.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:45.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:45.232 "hdgst": ${hdgst:-false}, 00:12:45.232 "ddgst": ${ddgst:-false} 00:12:45.232 }, 00:12:45.232 "method": "bdev_nvme_attach_controller" 00:12:45.232 } 00:12:45.232 EOF 00:12:45.232 )") 00:12:45.232 15:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:45.232 15:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:45.232 15:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:45.232 15:49:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:45.232 "params": { 00:12:45.232 "name": "Nvme0", 00:12:45.232 "trtype": "tcp", 00:12:45.232 "traddr": "10.0.0.2", 00:12:45.232 "adrfam": "ipv4", 00:12:45.232 "trsvcid": "4420", 00:12:45.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:45.232 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:45.232 "hdgst": false, 00:12:45.232 "ddgst": false 00:12:45.232 }, 00:12:45.232 "method": "bdev_nvme_attach_controller" 00:12:45.232 }' 00:12:45.490 [2024-07-12 15:49:42.560458] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:12:45.490 [2024-07-12 15:49:42.560544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid714634 ] 00:12:45.490 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.490 [2024-07-12 15:49:42.621146] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.490 [2024-07-12 15:49:42.735035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.748 Running I/O for 1 seconds... 
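The second run completes cleanly (the 1.04 s / ~1604 IOPS summary below), after which stoptarget and nvmftestfini unwind everything the test set up. A rough sketch of that teardown; the namespace-removal line is an assumption about what remove_spdk_ns does here, and $testdir stands for the target test directory:

    rm -f ./local-job0-0-verify.state
    rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
    sync
    modprobe -v -r nvme-tcp              # -v shows the rmmod calls: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid=714179, the nvmf_tgt started for this test
    ip netns delete cvl_0_0_ns_spdk      # assumption: remove_spdk_ns deletes the test namespace
    ip -4 addr flush cvl_0_1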
00:12:46.685 00:12:46.685 Latency(us) 00:12:46.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:46.685 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:46.685 Verification LBA range: start 0x0 length 0x400 00:12:46.685 Nvme0n1 : 1.04 1603.95 100.25 0.00 0.00 39271.83 8738.13 33593.27 00:12:46.685 =================================================================================================================== 00:12:46.685 Total : 1603.95 100.25 0.00 0.00 39271.83 8738.13 33593.27 00:12:46.942 15:49:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:46.942 15:49:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:46.942 15:49:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:46.942 15:49:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:46.942 15:49:44 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:46.942 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:46.942 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:46.942 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:46.942 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:46.942 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:46.942 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:47.201 rmmod nvme_tcp 00:12:47.201 rmmod nvme_fabrics 00:12:47.201 rmmod nvme_keyring 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 714179 ']' 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 714179 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 714179 ']' 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 714179 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 714179 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 714179' 00:12:47.201 killing process with pid 714179 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 714179 00:12:47.201 15:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 714179 00:12:47.460 [2024-07-12 15:49:44.574028] app.c: 
715:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:47.460 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:47.460 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:47.460 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:47.460 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:47.461 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:47.461 15:49:44 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.461 15:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:47.461 15:49:44 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.365 15:49:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:49.365 15:49:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:49.365 00:12:49.365 real 0m9.522s 00:12:49.365 user 0m22.715s 00:12:49.365 sys 0m2.847s 00:12:49.365 15:49:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:49.365 15:49:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:49.365 ************************************ 00:12:49.365 END TEST nvmf_host_management 00:12:49.365 ************************************ 00:12:49.624 15:49:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:49.624 15:49:46 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:49.624 15:49:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:49.624 15:49:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.624 15:49:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:49.624 ************************************ 00:12:49.624 START TEST nvmf_lvol 00:12:49.624 ************************************ 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:12:49.624 * Looking for test storage... 
00:12:49.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.624 15:49:46 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:49.624 15:49:46 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.525 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:51.526 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:51.526 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:51.526 Found net devices under 0000:84:00.0: cvl_0_0 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:51.526 Found net devices under 0000:84:00.1: cvl_0_1 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:51.526 
15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:51.526 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:51.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:12:51.784 00:12:51.784 --- 10.0.0.2 ping statistics --- 00:12:51.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.784 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:51.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:12:51.784 00:12:51.784 --- 10.0.0.1 ping statistics --- 00:12:51.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.784 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:51.784 15:49:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:51.785 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=716848 00:12:51.785 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:51.785 15:49:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 716848 00:12:51.785 15:49:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 716848 ']' 00:12:51.785 15:49:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.785 15:49:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:51.785 15:49:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.785 15:49:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:51.785 15:49:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:51.785 [2024-07-12 15:49:49.025421] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:12:51.785 [2024-07-12 15:49:49.025507] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.785 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.042 [2024-07-12 15:49:49.092027] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:52.042 [2024-07-12 15:49:49.196122] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.042 [2024-07-12 15:49:49.196183] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
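For reference while reading the trace above, the TCP test-network bring-up performed by nvmf/common.sh reduces to roughly the following sequence. This is a condensed sketch of what this particular run did: the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addressing, port 4420 and the 0x7 core mask are values from this job, and SPDK_DIR is a placeholder for the spdk checkout path.

#!/usr/bin/env bash
# Move one port of the NIC pair into a private namespace and address both ends.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (host)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                  # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host
modprobe nvme-tcp
# Launch the SPDK target inside the namespace (core mask 0x7 for this test).
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &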
00:12:52.042 [2024-07-12 15:49:49.196211] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.042 [2024-07-12 15:49:49.196222] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.042 [2024-07-12 15:49:49.196233] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.042 [2024-07-12 15:49:49.196363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.042 [2024-07-12 15:49:49.196472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.042 [2024-07-12 15:49:49.196480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.042 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:52.042 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:12:52.042 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.042 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:52.042 15:49:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:52.042 15:49:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.042 15:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:52.300 [2024-07-12 15:49:49.548649] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.300 15:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.864 15:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:52.864 15:49:49 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:52.864 15:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:52.864 15:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:53.121 15:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:53.378 15:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=313bf6ba-54ee-4961-8042-304b8d8c6173 00:12:53.378 15:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 313bf6ba-54ee-4961-8042-304b8d8c6173 lvol 20 00:12:53.635 15:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a67aff3f-838f-4b0c-88e0-59f3f4d386c3 00:12:53.635 15:49:50 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:53.891 15:49:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a67aff3f-838f-4b0c-88e0-59f3f4d386c3 00:12:54.148 15:49:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
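The RPC sequence traced here builds the volume stack that the perf run below writes to: a TCP transport, two 64 MiB malloc bdevs striped into raid0, an lvol store on the raid with one 20 MiB lvol, and a subsystem exposing that lvol on 10.0.0.2:4420. A condensed sketch, assuming rpc.py talks to the default /var/tmp/spdk.sock of the target started earlier and that SPDK_DIR is the checkout path:

#!/usr/bin/env bash
rpc="$SPDK_DIR/scripts/rpc.py"

# Transport plus backing devices.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                  # Malloc0
$rpc bdev_malloc_create 64 512                                  # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

# Logical volume store on the raid, with a 20 MiB lvol.
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

# Export the lvol over NVMe/TCP on 10.0.0.2:4420, plus a discovery listener.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420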
00:12:54.405 [2024-07-12 15:49:51.615038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.406 15:49:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:54.662 15:49:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=717156 00:12:54.662 15:49:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:54.662 15:49:51 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:54.662 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.592 15:49:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a67aff3f-838f-4b0c-88e0-59f3f4d386c3 MY_SNAPSHOT 00:12:56.154 15:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ae29c9bb-1701-4daa-bbcf-b189f557b7ea 00:12:56.154 15:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a67aff3f-838f-4b0c-88e0-59f3f4d386c3 30 00:12:56.411 15:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone ae29c9bb-1701-4daa-bbcf-b189f557b7ea MY_CLONE 00:12:56.669 15:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a598e27d-cc3d-445c-8653-032ba5f73700 00:12:56.669 15:49:53 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a598e27d-cc3d-445c-8653-032ba5f73700 00:12:57.599 15:49:54 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 717156 00:13:05.697 Initializing NVMe Controllers 00:13:05.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:05.697 Controller IO queue size 128, less than required. 00:13:05.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:05.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:05.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:05.697 Initialization complete. Launching workers. 
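While spdk_nvme_perf drives 10 seconds of 4 KiB random writes at queue depth 128 against the exported namespace, the script mutates the lvol underneath the load: it takes a snapshot, grows the origin from 20 to 30 MiB, clones the snapshot, and inflates the clone. A sketch of that phase, reusing the rpc and lvol variables from the previous sketch; MY_SNAPSHOT and MY_CLONE are the names used by the test:

# Load generator on cores 3 and 4 (mask 0x18), run in the background.
"$SPDK_DIR/build/bin/spdk_nvme_perf" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
sleep 1

# Online lvol operations while I/O is in flight.
snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # read-only snapshot of the origin
$rpc bdev_lvol_resize "$lvol" 30                          # grow the origin from 20 MiB to 30 MiB
clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)        # thin clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                           # allocate all of the clone's clusters

wait "$perf_pid"                                          # perf must finish cleanly despite the churn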
00:13:05.697 ======================================================== 00:13:05.697 Latency(us) 00:13:05.697 Device Information : IOPS MiB/s Average min max 00:13:05.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10589.30 41.36 12061.63 365.71 76779.91 00:13:05.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10475.10 40.92 12189.02 3331.90 79244.25 00:13:05.697 ======================================================== 00:13:05.697 Total : 21064.40 82.28 12124.98 365.71 79244.25 00:13:05.697 00:13:05.697 15:50:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:05.697 15:50:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a67aff3f-838f-4b0c-88e0-59f3f4d386c3 00:13:05.697 15:50:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 313bf6ba-54ee-4961-8042-304b8d8c6173 00:13:05.954 15:50:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:05.954 15:50:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:05.954 15:50:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:05.954 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:05.954 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:13:05.954 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:05.954 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:13:05.954 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:05.954 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:05.954 rmmod nvme_tcp 00:13:05.954 rmmod nvme_fabrics 00:13:05.954 rmmod nvme_keyring 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 716848 ']' 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 716848 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 716848 ']' 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 716848 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 716848 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 716848' 00:13:06.212 killing process with pid 716848 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 716848 00:13:06.212 15:50:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 716848 00:13:06.471 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:06.471 15:50:03 
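Teardown then unwinds the stack before the namespace itself is removed: delete the subsystem, the lvol and the lvol store, unload the kernel NVMe modules, and stop the target. A sketch of the cleanup traced above, where nvmfpid holds the PID of the nvmf_tgt recorded at start-up (716848 in this run):

$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"

sync
modprobe -v -r nvme-tcp          # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # stop the target; the cvl_0_0_ns_spdk namespace is flushed afterwards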
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:06.471 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:06.471 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:06.471 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:06.471 15:50:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.471 15:50:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.471 15:50:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.371 15:50:05 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:08.371 00:13:08.371 real 0m18.925s 00:13:08.371 user 1m4.501s 00:13:08.371 sys 0m5.805s 00:13:08.371 15:50:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:08.371 15:50:05 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:08.371 ************************************ 00:13:08.371 END TEST nvmf_lvol 00:13:08.371 ************************************ 00:13:08.371 15:50:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:08.371 15:50:05 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:08.371 15:50:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:08.372 15:50:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:08.372 15:50:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:08.671 ************************************ 00:13:08.671 START TEST nvmf_lvs_grow 00:13:08.671 ************************************ 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:08.671 * Looking for test storage... 
00:13:08.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.671 15:50:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:13:08.672 15:50:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:11.197 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:11.197 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:11.197 Found net devices under 0000:84:00.0: cvl_0_0 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:11.197 Found net devices under 0000:84:00.1: cvl_0_1 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:11.197 15:50:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:11.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:13:11.197 00:13:11.197 --- 10.0.0.2 ping statistics --- 00:13:11.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.197 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:11.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:11.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:13:11.197 00:13:11.197 --- 10.0.0.1 ping statistics --- 00:13:11.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.197 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=720548 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 720548 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 720548 ']' 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:11.197 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.198 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:11.198 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:11.198 [2024-07-12 15:50:08.171753] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:13:11.198 [2024-07-12 15:50:08.171841] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.198 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.198 [2024-07-12 15:50:08.235350] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.198 [2024-07-12 15:50:08.335826] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.198 [2024-07-12 15:50:08.335884] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:11.198 [2024-07-12 15:50:08.335911] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.198 [2024-07-12 15:50:08.335922] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.198 [2024-07-12 15:50:08.335931] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:11.198 [2024-07-12 15:50:08.335957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.198 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:11.198 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:13:11.198 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:11.198 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:11.198 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:11.198 15:50:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.198 15:50:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:11.761 [2024-07-12 15:50:08.749230] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.761 15:50:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:13:11.761 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:11.761 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.761 15:50:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:11.761 ************************************ 00:13:11.761 START TEST lvs_grow_clean 00:13:11.761 ************************************ 00:13:11.761 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:13:11.761 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:11.761 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:11.761 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:11.761 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:11.761 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:11.761 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:11.761 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:11.761 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:11.761 15:50:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:12.018 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:13:12.018 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:12.275 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0d6c369a-c187-4bab-89b7-0384b6ee30c0 00:13:12.275 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d6c369a-c187-4bab-89b7-0384b6ee30c0 00:13:12.275 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:12.532 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:12.532 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:12.532 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0d6c369a-c187-4bab-89b7-0384b6ee30c0 lvol 150 00:13:12.789 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=96978132-4a96-4d37-aba0-4527014523af 00:13:12.789 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:12.789 15:50:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:13.046 [2024-07-12 15:50:10.162134] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:13.046 [2024-07-12 15:50:10.162261] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:13.046 true 00:13:13.046 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d6c369a-c187-4bab-89b7-0384b6ee30c0 00:13:13.046 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:13.303 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:13.303 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:13.560 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 96978132-4a96-4d37-aba0-4527014523af 00:13:13.817 15:50:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:14.074 [2024-07-12 15:50:11.133121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.074 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:14.331 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=720988 00:13:14.331 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:14.331 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:14.331 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 720988 /var/tmp/bdevperf.sock 00:13:14.331 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 720988 ']' 00:13:14.331 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:14.331 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.331 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:14.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:14.331 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.331 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:14.331 [2024-07-12 15:50:11.432886] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
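The lvs_grow_clean case starting here exercises bdev_lvol_grow_lvstore: the store is created on a 200 MB AIO file with 4 MiB clusters and a large metadata reservation (49 data clusters), a 150 MiB lvol is carved out of it and exported through nqn.2016-06.io.spdk:cnode0, the backing file is truncated to 400 MB and rescanned, and later in the run, while bdevperf writes to the namespace, an explicit grow call takes the store to 99 clusters. A condensed sketch of those steps; aio_file stands for the test/nvmf/target/aio_bdev path used above:

rpc="$SPDK_DIR/scripts/rpc.py"
aio_file="$SPDK_DIR/test/nvmf/target/aio_bdev"

rm -f "$aio_file"
truncate -s 200M "$aio_file"
$rpc bdev_aio_create "$aio_file" aio_bdev 4096

# 4 MiB clusters; the md-pages-per-cluster ratio reserves metadata so the store can grow later.
lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

# Enlarge the backing file, let the AIO bdev pick up the new size, then grow the store on top of it.
truncate -s 400M "$aio_file"
$rpc bdev_aio_rescan aio_bdev                                              # 51200 -> 102400 blocks
$rpc bdev_lvol_grow_lvstore -u "$lvs"
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99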
00:13:14.331 [2024-07-12 15:50:11.432959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid720988 ] 00:13:14.331 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.331 [2024-07-12 15:50:11.489517] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.331 [2024-07-12 15:50:11.598151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.587 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:14.587 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:13:14.587 15:50:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:14.844 Nvme0n1 00:13:14.844 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:15.100 [ 00:13:15.100 { 00:13:15.100 "name": "Nvme0n1", 00:13:15.100 "aliases": [ 00:13:15.100 "96978132-4a96-4d37-aba0-4527014523af" 00:13:15.100 ], 00:13:15.100 "product_name": "NVMe disk", 00:13:15.100 "block_size": 4096, 00:13:15.100 "num_blocks": 38912, 00:13:15.100 "uuid": "96978132-4a96-4d37-aba0-4527014523af", 00:13:15.100 "assigned_rate_limits": { 00:13:15.100 "rw_ios_per_sec": 0, 00:13:15.100 "rw_mbytes_per_sec": 0, 00:13:15.100 "r_mbytes_per_sec": 0, 00:13:15.101 "w_mbytes_per_sec": 0 00:13:15.101 }, 00:13:15.101 "claimed": false, 00:13:15.101 "zoned": false, 00:13:15.101 "supported_io_types": { 00:13:15.101 "read": true, 00:13:15.101 "write": true, 00:13:15.101 "unmap": true, 00:13:15.101 "flush": true, 00:13:15.101 "reset": true, 00:13:15.101 "nvme_admin": true, 00:13:15.101 "nvme_io": true, 00:13:15.101 "nvme_io_md": false, 00:13:15.101 "write_zeroes": true, 00:13:15.101 "zcopy": false, 00:13:15.101 "get_zone_info": false, 00:13:15.101 "zone_management": false, 00:13:15.101 "zone_append": false, 00:13:15.101 "compare": true, 00:13:15.101 "compare_and_write": true, 00:13:15.101 "abort": true, 00:13:15.101 "seek_hole": false, 00:13:15.101 "seek_data": false, 00:13:15.101 "copy": true, 00:13:15.101 "nvme_iov_md": false 00:13:15.101 }, 00:13:15.101 "memory_domains": [ 00:13:15.101 { 00:13:15.101 "dma_device_id": "system", 00:13:15.101 "dma_device_type": 1 00:13:15.101 } 00:13:15.101 ], 00:13:15.101 "driver_specific": { 00:13:15.101 "nvme": [ 00:13:15.101 { 00:13:15.101 "trid": { 00:13:15.101 "trtype": "TCP", 00:13:15.101 "adrfam": "IPv4", 00:13:15.101 "traddr": "10.0.0.2", 00:13:15.101 "trsvcid": "4420", 00:13:15.101 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:15.101 }, 00:13:15.101 "ctrlr_data": { 00:13:15.101 "cntlid": 1, 00:13:15.101 "vendor_id": "0x8086", 00:13:15.101 "model_number": "SPDK bdev Controller", 00:13:15.101 "serial_number": "SPDK0", 00:13:15.101 "firmware_revision": "24.09", 00:13:15.101 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:15.101 "oacs": { 00:13:15.101 "security": 0, 00:13:15.101 "format": 0, 00:13:15.101 "firmware": 0, 00:13:15.101 "ns_manage": 0 00:13:15.101 }, 00:13:15.101 "multi_ctrlr": true, 00:13:15.101 "ana_reporting": false 00:13:15.101 }, 
00:13:15.101 "vs": { 00:13:15.101 "nvme_version": "1.3" 00:13:15.101 }, 00:13:15.101 "ns_data": { 00:13:15.101 "id": 1, 00:13:15.101 "can_share": true 00:13:15.101 } 00:13:15.101 } 00:13:15.101 ], 00:13:15.101 "mp_policy": "active_passive" 00:13:15.101 } 00:13:15.101 } 00:13:15.101 ] 00:13:15.101 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=721008 00:13:15.101 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:15.101 15:50:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:15.379 Running I/O for 10 seconds... 00:13:16.327 Latency(us) 00:13:16.327 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.327 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:16.327 Nvme0n1 : 1.00 16640.00 65.00 0.00 0.00 0.00 0.00 0.00 00:13:16.328 =================================================================================================================== 00:13:16.328 Total : 16640.00 65.00 0.00 0.00 0.00 0.00 0.00 00:13:16.328 00:13:17.261 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0d6c369a-c187-4bab-89b7-0384b6ee30c0 00:13:17.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:17.261 Nvme0n1 : 2.00 16926.00 66.12 0.00 0.00 0.00 0.00 0.00 00:13:17.261 =================================================================================================================== 00:13:17.261 Total : 16926.00 66.12 0.00 0.00 0.00 0.00 0.00 00:13:17.261 00:13:17.519 true 00:13:17.519 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d6c369a-c187-4bab-89b7-0384b6ee30c0 00:13:17.519 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:17.777 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:17.777 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:17.777 15:50:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 721008 00:13:18.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:18.344 Nvme0n1 : 3.00 16938.33 66.17 0.00 0.00 0.00 0.00 0.00 00:13:18.344 =================================================================================================================== 00:13:18.344 Total : 16938.33 66.17 0.00 0.00 0.00 0.00 0.00 00:13:18.344 00:13:19.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:19.278 Nvme0n1 : 4.00 17076.25 66.70 0.00 0.00 0.00 0.00 0.00 00:13:19.278 =================================================================================================================== 00:13:19.278 Total : 17076.25 66.70 0.00 0.00 0.00 0.00 0.00 00:13:19.278 00:13:20.212 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:20.212 Nvme0n1 : 5.00 17134.80 66.93 0.00 0.00 0.00 0.00 0.00 00:13:20.212 =================================================================================================================== 00:13:20.212 
Total : 17134.80 66.93 0.00 0.00 0.00 0.00 0.00 00:13:20.212 00:13:21.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:21.587 Nvme0n1 : 6.00 17200.00 67.19 0.00 0.00 0.00 0.00 0.00 00:13:21.587 =================================================================================================================== 00:13:21.587 Total : 17200.00 67.19 0.00 0.00 0.00 0.00 0.00 00:13:21.587 00:13:22.519 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:22.519 Nvme0n1 : 7.00 17201.86 67.19 0.00 0.00 0.00 0.00 0.00 00:13:22.519 =================================================================================================================== 00:13:22.519 Total : 17201.86 67.19 0.00 0.00 0.00 0.00 0.00 00:13:22.519 00:13:23.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:23.452 Nvme0n1 : 8.00 17219.12 67.26 0.00 0.00 0.00 0.00 0.00 00:13:23.452 =================================================================================================================== 00:13:23.452 Total : 17219.12 67.26 0.00 0.00 0.00 0.00 0.00 00:13:23.452 00:13:24.385 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:24.385 Nvme0n1 : 9.00 17251.33 67.39 0.00 0.00 0.00 0.00 0.00 00:13:24.385 =================================================================================================================== 00:13:24.385 Total : 17251.33 67.39 0.00 0.00 0.00 0.00 0.00 00:13:24.385 00:13:25.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:25.319 Nvme0n1 : 10.00 17281.80 67.51 0.00 0.00 0.00 0.00 0.00 00:13:25.319 =================================================================================================================== 00:13:25.319 Total : 17281.80 67.51 0.00 0.00 0.00 0.00 0.00 00:13:25.319 00:13:25.319 00:13:25.319 Latency(us) 00:13:25.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:25.319 Nvme0n1 : 10.01 17284.74 67.52 0.00 0.00 7401.45 2002.49 15146.10 00:13:25.319 =================================================================================================================== 00:13:25.319 Total : 17284.74 67.52 0.00 0.00 7401.45 2002.49 15146.10 00:13:25.319 0 00:13:25.319 15:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 720988 00:13:25.319 15:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 720988 ']' 00:13:25.319 15:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 720988 00:13:25.319 15:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:13:25.319 15:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:25.319 15:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 720988 00:13:25.319 15:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:25.319 15:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:25.319 15:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 720988' 00:13:25.319 killing process with pid 720988 00:13:25.319 15:50:22 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 720988 00:13:25.319 Received shutdown signal, test time was about 10.000000 seconds 00:13:25.319 00:13:25.319 Latency(us) 00:13:25.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.320 =================================================================================================================== 00:13:25.320 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:25.320 15:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 720988 00:13:25.577 15:50:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:25.835 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:26.092 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d6c369a-c187-4bab-89b7-0384b6ee30c0 00:13:26.092 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:26.349 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:26.349 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:13:26.349 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:26.606 [2024-07-12 15:50:23.871516] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:26.863 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d6c369a-c187-4bab-89b7-0384b6ee30c0 00:13:26.863 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:13:26.863 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d6c369a-c187-4bab-89b7-0384b6ee30c0 00:13:26.863 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.863 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:26.863 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.863 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:26.863 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.863 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:26.863 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.863 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:26.864 15:50:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d6c369a-c187-4bab-89b7-0384b6ee30c0 00:13:26.864 request: 00:13:26.864 { 00:13:26.864 "uuid": "0d6c369a-c187-4bab-89b7-0384b6ee30c0", 00:13:26.864 "method": "bdev_lvol_get_lvstores", 00:13:26.864 "req_id": 1 00:13:26.864 } 00:13:26.864 Got JSON-RPC error response 00:13:26.864 response: 00:13:26.864 { 00:13:26.864 "code": -19, 00:13:26.864 "message": "No such device" 00:13:26.864 } 00:13:27.120 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:13:27.120 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:27.120 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:27.120 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:27.120 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:27.377 aio_bdev 00:13:27.377 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 96978132-4a96-4d37-aba0-4527014523af 00:13:27.377 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=96978132-4a96-4d37-aba0-4527014523af 00:13:27.377 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:27.377 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:13:27.377 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:27.377 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:27.377 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:27.634 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 96978132-4a96-4d37-aba0-4527014523af -t 2000 00:13:27.634 [ 00:13:27.634 { 00:13:27.634 "name": "96978132-4a96-4d37-aba0-4527014523af", 00:13:27.634 "aliases": [ 00:13:27.634 "lvs/lvol" 00:13:27.634 ], 00:13:27.634 "product_name": "Logical Volume", 00:13:27.634 "block_size": 4096, 00:13:27.634 "num_blocks": 38912, 00:13:27.634 "uuid": "96978132-4a96-4d37-aba0-4527014523af", 00:13:27.634 "assigned_rate_limits": { 00:13:27.634 "rw_ios_per_sec": 0, 00:13:27.634 "rw_mbytes_per_sec": 0, 00:13:27.634 "r_mbytes_per_sec": 0, 00:13:27.634 "w_mbytes_per_sec": 0 00:13:27.634 }, 00:13:27.634 "claimed": false, 00:13:27.634 "zoned": false, 00:13:27.634 "supported_io_types": { 00:13:27.634 "read": true, 00:13:27.634 "write": true, 00:13:27.634 "unmap": true, 00:13:27.634 "flush": false, 00:13:27.634 "reset": true, 00:13:27.634 "nvme_admin": false, 00:13:27.634 "nvme_io": false, 00:13:27.634 
"nvme_io_md": false, 00:13:27.634 "write_zeroes": true, 00:13:27.634 "zcopy": false, 00:13:27.634 "get_zone_info": false, 00:13:27.634 "zone_management": false, 00:13:27.634 "zone_append": false, 00:13:27.634 "compare": false, 00:13:27.634 "compare_and_write": false, 00:13:27.634 "abort": false, 00:13:27.634 "seek_hole": true, 00:13:27.634 "seek_data": true, 00:13:27.634 "copy": false, 00:13:27.634 "nvme_iov_md": false 00:13:27.634 }, 00:13:27.634 "driver_specific": { 00:13:27.634 "lvol": { 00:13:27.634 "lvol_store_uuid": "0d6c369a-c187-4bab-89b7-0384b6ee30c0", 00:13:27.634 "base_bdev": "aio_bdev", 00:13:27.634 "thin_provision": false, 00:13:27.634 "num_allocated_clusters": 38, 00:13:27.634 "snapshot": false, 00:13:27.634 "clone": false, 00:13:27.634 "esnap_clone": false 00:13:27.634 } 00:13:27.634 } 00:13:27.634 } 00:13:27.634 ] 00:13:27.634 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:13:27.634 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d6c369a-c187-4bab-89b7-0384b6ee30c0 00:13:27.634 15:50:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:27.891 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:27.891 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0d6c369a-c187-4bab-89b7-0384b6ee30c0 00:13:27.891 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:28.149 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:28.149 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 96978132-4a96-4d37-aba0-4527014523af 00:13:28.406 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0d6c369a-c187-4bab-89b7-0384b6ee30c0 00:13:28.664 15:50:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:28.922 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.180 00:13:29.180 real 0m17.423s 00:13:29.180 user 0m16.881s 00:13:29.180 sys 0m1.973s 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:13:29.180 ************************************ 00:13:29.180 END TEST lvs_grow_clean 00:13:29.180 ************************************ 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:29.180 ************************************ 00:13:29.180 START TEST lvs_grow_dirty 00:13:29.180 ************************************ 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:29.180 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:29.438 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:29.438 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:29.696 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=55b048f4-28bc-4f61-92bc-edd92a740e0d 00:13:29.696 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b048f4-28bc-4f61-92bc-edd92a740e0d 00:13:29.696 15:50:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:29.953 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:29.953 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:29.953 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 55b048f4-28bc-4f61-92bc-edd92a740e0d lvol 150 00:13:30.211 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6af8aeb3-72c5-433c-bab2-fb1c6d27401e 00:13:30.211 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:30.211 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:30.468 
[2024-07-12 15:50:27.543991] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:30.468 [2024-07-12 15:50:27.544102] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:30.468 true 00:13:30.468 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b048f4-28bc-4f61-92bc-edd92a740e0d 00:13:30.468 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:30.725 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:30.725 15:50:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:30.982 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6af8aeb3-72c5-433c-bab2-fb1c6d27401e 00:13:31.239 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:31.496 [2024-07-12 15:50:28.567154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:31.496 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:31.753 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=723045 00:13:31.753 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:31.753 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 723045 /var/tmp/bdevperf.sock 00:13:31.753 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 723045 ']' 00:13:31.753 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:31.753 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:31.753 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.753 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:31.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:31.753 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.753 15:50:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:31.753 [2024-07-12 15:50:28.874513] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:13:31.753 [2024-07-12 15:50:28.874601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid723045 ] 00:13:31.753 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.753 [2024-07-12 15:50:28.933362] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.753 [2024-07-12 15:50:29.041612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.011 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.011 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:32.011 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:32.575 Nvme0n1 00:13:32.575 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:32.832 [ 00:13:32.832 { 00:13:32.832 "name": "Nvme0n1", 00:13:32.832 "aliases": [ 00:13:32.832 "6af8aeb3-72c5-433c-bab2-fb1c6d27401e" 00:13:32.832 ], 00:13:32.832 "product_name": "NVMe disk", 00:13:32.832 "block_size": 4096, 00:13:32.832 "num_blocks": 38912, 00:13:32.832 "uuid": "6af8aeb3-72c5-433c-bab2-fb1c6d27401e", 00:13:32.832 "assigned_rate_limits": { 00:13:32.832 "rw_ios_per_sec": 0, 00:13:32.832 "rw_mbytes_per_sec": 0, 00:13:32.833 "r_mbytes_per_sec": 0, 00:13:32.833 "w_mbytes_per_sec": 0 00:13:32.833 }, 00:13:32.833 "claimed": false, 00:13:32.833 "zoned": false, 00:13:32.833 "supported_io_types": { 00:13:32.833 "read": true, 00:13:32.833 "write": true, 00:13:32.833 "unmap": true, 00:13:32.833 "flush": true, 00:13:32.833 "reset": true, 00:13:32.833 "nvme_admin": true, 00:13:32.833 "nvme_io": true, 00:13:32.833 "nvme_io_md": false, 00:13:32.833 "write_zeroes": true, 00:13:32.833 "zcopy": false, 00:13:32.833 "get_zone_info": false, 00:13:32.833 "zone_management": false, 00:13:32.833 "zone_append": false, 00:13:32.833 "compare": true, 00:13:32.833 "compare_and_write": true, 00:13:32.833 "abort": true, 00:13:32.833 "seek_hole": false, 00:13:32.833 "seek_data": false, 00:13:32.833 "copy": true, 00:13:32.833 "nvme_iov_md": false 00:13:32.833 }, 00:13:32.833 "memory_domains": [ 00:13:32.833 { 00:13:32.833 "dma_device_id": "system", 00:13:32.833 "dma_device_type": 1 00:13:32.833 } 00:13:32.833 ], 00:13:32.833 "driver_specific": { 00:13:32.833 "nvme": [ 00:13:32.833 { 00:13:32.833 "trid": { 00:13:32.833 "trtype": "TCP", 00:13:32.833 "adrfam": "IPv4", 00:13:32.833 "traddr": "10.0.0.2", 00:13:32.833 "trsvcid": "4420", 00:13:32.833 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:32.833 }, 00:13:32.833 "ctrlr_data": { 00:13:32.833 "cntlid": 1, 00:13:32.833 "vendor_id": "0x8086", 00:13:32.833 "model_number": "SPDK bdev Controller", 00:13:32.833 "serial_number": "SPDK0", 
00:13:32.833 "firmware_revision": "24.09", 00:13:32.833 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:32.833 "oacs": { 00:13:32.833 "security": 0, 00:13:32.833 "format": 0, 00:13:32.833 "firmware": 0, 00:13:32.833 "ns_manage": 0 00:13:32.833 }, 00:13:32.833 "multi_ctrlr": true, 00:13:32.833 "ana_reporting": false 00:13:32.833 }, 00:13:32.833 "vs": { 00:13:32.833 "nvme_version": "1.3" 00:13:32.833 }, 00:13:32.833 "ns_data": { 00:13:32.833 "id": 1, 00:13:32.833 "can_share": true 00:13:32.833 } 00:13:32.833 } 00:13:32.833 ], 00:13:32.833 "mp_policy": "active_passive" 00:13:32.833 } 00:13:32.833 } 00:13:32.833 ] 00:13:32.833 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=723178 00:13:32.833 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:32.833 15:50:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:32.833 Running I/O for 10 seconds... 00:13:34.203 Latency(us) 00:13:34.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:34.203 Nvme0n1 : 1.00 16522.00 64.54 0.00 0.00 0.00 0.00 0.00 00:13:34.203 =================================================================================================================== 00:13:34.203 Total : 16522.00 64.54 0.00 0.00 0.00 0.00 0.00 00:13:34.203 00:13:34.769 15:50:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 55b048f4-28bc-4f61-92bc-edd92a740e0d 00:13:35.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.027 Nvme0n1 : 2.00 16740.00 65.39 0.00 0.00 0.00 0.00 0.00 00:13:35.027 =================================================================================================================== 00:13:35.027 Total : 16740.00 65.39 0.00 0.00 0.00 0.00 0.00 00:13:35.027 00:13:35.027 true 00:13:35.027 15:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b048f4-28bc-4f61-92bc-edd92a740e0d 00:13:35.027 15:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:35.284 15:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:35.284 15:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:35.284 15:50:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 723178 00:13:35.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:35.858 Nvme0n1 : 3.00 16879.00 65.93 0.00 0.00 0.00 0.00 0.00 00:13:35.858 =================================================================================================================== 00:13:35.858 Total : 16879.00 65.93 0.00 0.00 0.00 0.00 0.00 00:13:35.858 00:13:36.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:36.794 Nvme0n1 : 4.00 16980.00 66.33 0.00 0.00 0.00 0.00 0.00 00:13:36.794 =================================================================================================================== 00:13:36.794 Total : 16980.00 66.33 0.00 0.00 
0.00 0.00 0.00 00:13:36.794 00:13:38.166 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:38.166 Nvme0n1 : 5.00 17041.60 66.57 0.00 0.00 0.00 0.00 0.00 00:13:38.166 =================================================================================================================== 00:13:38.166 Total : 17041.60 66.57 0.00 0.00 0.00 0.00 0.00 00:13:38.166 00:13:39.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:39.099 Nvme0n1 : 6.00 17136.33 66.94 0.00 0.00 0.00 0.00 0.00 00:13:39.099 =================================================================================================================== 00:13:39.099 Total : 17136.33 66.94 0.00 0.00 0.00 0.00 0.00 00:13:39.099 00:13:40.033 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:40.033 Nvme0n1 : 7.00 17210.14 67.23 0.00 0.00 0.00 0.00 0.00 00:13:40.033 =================================================================================================================== 00:13:40.033 Total : 17210.14 67.23 0.00 0.00 0.00 0.00 0.00 00:13:40.033 00:13:41.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:41.023 Nvme0n1 : 8.00 17218.25 67.26 0.00 0.00 0.00 0.00 0.00 00:13:41.023 =================================================================================================================== 00:13:41.023 Total : 17218.25 67.26 0.00 0.00 0.00 0.00 0.00 00:13:41.023 00:13:41.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:41.961 Nvme0n1 : 9.00 17232.78 67.32 0.00 0.00 0.00 0.00 0.00 00:13:41.961 =================================================================================================================== 00:13:41.961 Total : 17232.78 67.32 0.00 0.00 0.00 0.00 0.00 00:13:41.961 00:13:42.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.895 Nvme0n1 : 10.00 17251.60 67.39 0.00 0.00 0.00 0.00 0.00 00:13:42.895 =================================================================================================================== 00:13:42.895 Total : 17251.60 67.39 0.00 0.00 0.00 0.00 0.00 00:13:42.895 00:13:42.895 00:13:42.895 Latency(us) 00:13:42.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.895 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:42.895 Nvme0n1 : 10.00 17251.54 67.39 0.00 0.00 7415.15 2281.62 14854.83 00:13:42.895 =================================================================================================================== 00:13:42.895 Total : 17251.54 67.39 0.00 0.00 7415.15 2281.62 14854.83 00:13:42.895 0 00:13:42.895 15:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 723045 00:13:42.895 15:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 723045 ']' 00:13:42.895 15:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 723045 00:13:42.895 15:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:13:42.895 15:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:42.895 15:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 723045 00:13:42.895 15:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:42.895 15:50:40 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:42.895 15:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 723045' 00:13:42.895 killing process with pid 723045 00:13:42.895 15:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 723045 00:13:42.895 Received shutdown signal, test time was about 10.000000 seconds 00:13:42.895 00:13:42.895 Latency(us) 00:13:42.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.895 =================================================================================================================== 00:13:42.895 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:42.895 15:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 723045 00:13:43.152 15:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:43.717 15:50:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:43.974 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b048f4-28bc-4f61-92bc-edd92a740e0d 00:13:43.974 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 720548 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 720548 00:13:44.232 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 720548 Killed "${NVMF_APP[@]}" "$@" 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=724512 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 724512 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 724512 ']' 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.232 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:44.232 [2024-07-12 15:50:41.355522] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:13:44.232 [2024-07-12 15:50:41.355608] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.232 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.232 [2024-07-12 15:50:41.421112] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.490 [2024-07-12 15:50:41.532474] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:44.490 [2024-07-12 15:50:41.532521] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:44.490 [2024-07-12 15:50:41.532534] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:44.490 [2024-07-12 15:50:41.532544] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:44.490 [2024-07-12 15:50:41.532553] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:44.490 [2024-07-12 15:50:41.532583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.490 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.490 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:13:44.490 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:44.490 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:44.490 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:44.490 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:44.490 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:44.747 [2024-07-12 15:50:41.945314] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:44.747 [2024-07-12 15:50:41.945448] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:44.747 [2024-07-12 15:50:41.945494] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:44.747 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:44.747 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6af8aeb3-72c5-433c-bab2-fb1c6d27401e 00:13:44.747 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6af8aeb3-72c5-433c-bab2-fb1c6d27401e 00:13:44.747 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:44.747 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:44.747 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:44.747 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:44.747 15:50:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:45.004 15:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6af8aeb3-72c5-433c-bab2-fb1c6d27401e -t 2000 00:13:45.262 [ 00:13:45.262 { 00:13:45.262 "name": "6af8aeb3-72c5-433c-bab2-fb1c6d27401e", 00:13:45.262 "aliases": [ 00:13:45.262 "lvs/lvol" 00:13:45.262 ], 00:13:45.262 "product_name": "Logical Volume", 00:13:45.262 "block_size": 4096, 00:13:45.262 "num_blocks": 38912, 00:13:45.262 "uuid": "6af8aeb3-72c5-433c-bab2-fb1c6d27401e", 00:13:45.262 "assigned_rate_limits": { 00:13:45.262 "rw_ios_per_sec": 0, 00:13:45.262 "rw_mbytes_per_sec": 0, 00:13:45.262 "r_mbytes_per_sec": 0, 00:13:45.262 "w_mbytes_per_sec": 0 00:13:45.262 }, 00:13:45.262 "claimed": false, 00:13:45.262 "zoned": false, 00:13:45.262 "supported_io_types": { 00:13:45.262 "read": true, 00:13:45.262 "write": true, 00:13:45.262 "unmap": true, 00:13:45.262 "flush": false, 00:13:45.262 "reset": true, 00:13:45.262 "nvme_admin": false, 00:13:45.262 "nvme_io": false, 00:13:45.262 "nvme_io_md": 
false, 00:13:45.262 "write_zeroes": true, 00:13:45.262 "zcopy": false, 00:13:45.262 "get_zone_info": false, 00:13:45.262 "zone_management": false, 00:13:45.262 "zone_append": false, 00:13:45.262 "compare": false, 00:13:45.262 "compare_and_write": false, 00:13:45.262 "abort": false, 00:13:45.262 "seek_hole": true, 00:13:45.262 "seek_data": true, 00:13:45.262 "copy": false, 00:13:45.262 "nvme_iov_md": false 00:13:45.262 }, 00:13:45.262 "driver_specific": { 00:13:45.262 "lvol": { 00:13:45.262 "lvol_store_uuid": "55b048f4-28bc-4f61-92bc-edd92a740e0d", 00:13:45.262 "base_bdev": "aio_bdev", 00:13:45.262 "thin_provision": false, 00:13:45.262 "num_allocated_clusters": 38, 00:13:45.262 "snapshot": false, 00:13:45.262 "clone": false, 00:13:45.262 "esnap_clone": false 00:13:45.262 } 00:13:45.262 } 00:13:45.262 } 00:13:45.262 ] 00:13:45.262 15:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:45.262 15:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b048f4-28bc-4f61-92bc-edd92a740e0d 00:13:45.262 15:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:45.520 15:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:45.520 15:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b048f4-28bc-4f61-92bc-edd92a740e0d 00:13:45.520 15:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:45.778 15:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:45.778 15:50:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:46.035 [2024-07-12 15:50:43.210502] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:46.035 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b048f4-28bc-4f61-92bc-edd92a740e0d 00:13:46.035 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:13:46.035 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b048f4-28bc-4f61-92bc-edd92a740e0d 00:13:46.035 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.035 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.035 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.035 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.035 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:13:46.035 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.035 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:46.035 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:46.036 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b048f4-28bc-4f61-92bc-edd92a740e0d 00:13:46.292 request: 00:13:46.292 { 00:13:46.292 "uuid": "55b048f4-28bc-4f61-92bc-edd92a740e0d", 00:13:46.292 "method": "bdev_lvol_get_lvstores", 00:13:46.292 "req_id": 1 00:13:46.292 } 00:13:46.292 Got JSON-RPC error response 00:13:46.292 response: 00:13:46.292 { 00:13:46.292 "code": -19, 00:13:46.292 "message": "No such device" 00:13:46.292 } 00:13:46.292 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:13:46.292 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:46.292 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:46.292 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:46.292 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:46.550 aio_bdev 00:13:46.550 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6af8aeb3-72c5-433c-bab2-fb1c6d27401e 00:13:46.550 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=6af8aeb3-72c5-433c-bab2-fb1c6d27401e 00:13:46.550 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:46.550 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:13:46.550 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:46.550 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:46.550 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:46.808 15:50:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6af8aeb3-72c5-433c-bab2-fb1c6d27401e -t 2000 00:13:47.066 [ 00:13:47.066 { 00:13:47.066 "name": "6af8aeb3-72c5-433c-bab2-fb1c6d27401e", 00:13:47.066 "aliases": [ 00:13:47.066 "lvs/lvol" 00:13:47.066 ], 00:13:47.066 "product_name": "Logical Volume", 00:13:47.066 "block_size": 4096, 00:13:47.066 "num_blocks": 38912, 00:13:47.066 "uuid": "6af8aeb3-72c5-433c-bab2-fb1c6d27401e", 00:13:47.066 "assigned_rate_limits": { 00:13:47.066 "rw_ios_per_sec": 0, 00:13:47.066 "rw_mbytes_per_sec": 0, 00:13:47.066 "r_mbytes_per_sec": 0, 00:13:47.066 "w_mbytes_per_sec": 0 00:13:47.066 }, 00:13:47.066 "claimed": false, 00:13:47.066 "zoned": false, 00:13:47.066 "supported_io_types": { 
00:13:47.066 "read": true, 00:13:47.066 "write": true, 00:13:47.066 "unmap": true, 00:13:47.066 "flush": false, 00:13:47.066 "reset": true, 00:13:47.066 "nvme_admin": false, 00:13:47.066 "nvme_io": false, 00:13:47.066 "nvme_io_md": false, 00:13:47.066 "write_zeroes": true, 00:13:47.066 "zcopy": false, 00:13:47.066 "get_zone_info": false, 00:13:47.066 "zone_management": false, 00:13:47.066 "zone_append": false, 00:13:47.066 "compare": false, 00:13:47.066 "compare_and_write": false, 00:13:47.066 "abort": false, 00:13:47.066 "seek_hole": true, 00:13:47.066 "seek_data": true, 00:13:47.066 "copy": false, 00:13:47.066 "nvme_iov_md": false 00:13:47.066 }, 00:13:47.066 "driver_specific": { 00:13:47.066 "lvol": { 00:13:47.066 "lvol_store_uuid": "55b048f4-28bc-4f61-92bc-edd92a740e0d", 00:13:47.066 "base_bdev": "aio_bdev", 00:13:47.066 "thin_provision": false, 00:13:47.066 "num_allocated_clusters": 38, 00:13:47.066 "snapshot": false, 00:13:47.066 "clone": false, 00:13:47.066 "esnap_clone": false 00:13:47.066 } 00:13:47.066 } 00:13:47.066 } 00:13:47.066 ] 00:13:47.066 15:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:13:47.066 15:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b048f4-28bc-4f61-92bc-edd92a740e0d 00:13:47.066 15:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:47.323 15:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:47.323 15:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 55b048f4-28bc-4f61-92bc-edd92a740e0d 00:13:47.323 15:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:47.580 15:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:47.580 15:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6af8aeb3-72c5-433c-bab2-fb1c6d27401e 00:13:47.838 15:50:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 55b048f4-28bc-4f61-92bc-edd92a740e0d 00:13:48.096 15:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:48.354 00:13:48.354 real 0m19.203s 00:13:48.354 user 0m48.212s 00:13:48.354 sys 0m5.134s 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:48.354 ************************************ 00:13:48.354 END TEST lvs_grow_dirty 00:13:48.354 ************************************ 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:48.354 nvmf_trace.0 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:48.354 rmmod nvme_tcp 00:13:48.354 rmmod nvme_fabrics 00:13:48.354 rmmod nvme_keyring 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 724512 ']' 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 724512 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 724512 ']' 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 724512 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 724512 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 724512' 00:13:48.354 killing process with pid 724512 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 724512 00:13:48.354 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 724512 00:13:48.613 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.613 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:48.613 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:48.613 15:50:45 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:48.613 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:48.613 15:50:45 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.613 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.613 15:50:45 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.150 15:50:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:51.150 00:13:51.150 real 0m42.236s 00:13:51.150 user 1m10.826s 00:13:51.150 sys 0m9.128s 00:13:51.150 15:50:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:51.150 15:50:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:51.150 ************************************ 00:13:51.150 END TEST nvmf_lvs_grow 00:13:51.150 ************************************ 00:13:51.150 15:50:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:51.150 15:50:47 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:51.150 15:50:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:51.150 15:50:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:51.150 15:50:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:51.150 ************************************ 00:13:51.150 START TEST nvmf_bdev_io_wait 00:13:51.150 ************************************ 00:13:51.150 15:50:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:51.150 * Looking for test storage... 
00:13:51.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:51.150 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:51.151 15:50:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:53.051 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:53.051 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:53.051 Found net devices under 0000:84:00.0: cvl_0_0 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:53.051 Found net devices under 0000:84:00.1: cvl_0_1 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:53.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:53.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:13:53.051 00:13:53.051 --- 10.0.0.2 ping statistics --- 00:13:53.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.051 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:53.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:53.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:13:53.051 00:13:53.051 --- 10.0.0.1 ping statistics --- 00:13:53.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.051 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=727044 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 727044 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 727044 ']' 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.051 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.051 [2024-07-12 15:50:50.283876] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
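nvmftestinit above builds the physical TCP test bed: one port of the E810 pair (cvl_0_0) is pushed into a private network namespace for the target while its peer (cvl_0_1) stays in the root namespace as the initiator side, so host and target use separate network stacks connected through the real link. A condensed sketch of the traced steps, using this run's interface names and addresses:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP listener port
ping -c 1 10.0.0.2                                            # root ns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # and back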
00:13:53.051 [2024-07-12 15:50:50.283953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.051 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.308 [2024-07-12 15:50:50.350184] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.308 [2024-07-12 15:50:50.460724] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.308 [2024-07-12 15:50:50.460795] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.308 [2024-07-12 15:50:50.460824] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.308 [2024-07-12 15:50:50.460835] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.308 [2024-07-12 15:50:50.460844] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.308 [2024-07-12 15:50:50.460936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.308 [2024-07-12 15:50:50.460997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.308 [2024-07-12 15:50:50.461068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.308 [2024-07-12 15:50:50.461074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.308 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.308 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:13:53.308 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:53.308 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:53.308 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.308 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.308 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:53.308 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.308 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.309 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.309 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:53.309 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.309 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.309 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.309 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.309 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.309 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.309 [2024-07-12 15:50:50.585685] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.309 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:13:53.309 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:53.309 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.309 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.568 Malloc0 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:53.568 [2024-07-12 15:50:50.644979] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=727082 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=727084 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:53.568 { 00:13:53.568 "params": { 00:13:53.568 "name": "Nvme$subsystem", 00:13:53.568 "trtype": "$TEST_TRANSPORT", 00:13:53.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.568 "adrfam": "ipv4", 00:13:53.568 "trsvcid": "$NVMF_PORT", 00:13:53.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.568 "hdgst": ${hdgst:-false}, 00:13:53.568 "ddgst": ${ddgst:-false} 00:13:53.568 }, 00:13:53.568 "method": "bdev_nvme_attach_controller" 00:13:53.568 } 00:13:53.568 EOF 00:13:53.568 )") 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:53.568 15:50:50 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=727086 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:53.568 { 00:13:53.568 "params": { 00:13:53.568 "name": "Nvme$subsystem", 00:13:53.568 "trtype": "$TEST_TRANSPORT", 00:13:53.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.568 "adrfam": "ipv4", 00:13:53.568 "trsvcid": "$NVMF_PORT", 00:13:53.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.568 "hdgst": ${hdgst:-false}, 00:13:53.568 "ddgst": ${ddgst:-false} 00:13:53.568 }, 00:13:53.568 "method": "bdev_nvme_attach_controller" 00:13:53.568 } 00:13:53.568 EOF 00:13:53.568 )") 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=727089 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:53.568 { 00:13:53.568 "params": { 00:13:53.568 "name": "Nvme$subsystem", 00:13:53.568 "trtype": "$TEST_TRANSPORT", 00:13:53.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.568 "adrfam": "ipv4", 00:13:53.568 "trsvcid": "$NVMF_PORT", 00:13:53.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.568 "hdgst": ${hdgst:-false}, 00:13:53.568 "ddgst": ${ddgst:-false} 00:13:53.568 }, 00:13:53.568 "method": "bdev_nvme_attach_controller" 00:13:53.568 } 00:13:53.568 EOF 00:13:53.568 )") 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:13:53.568 { 00:13:53.568 "params": { 00:13:53.568 "name": "Nvme$subsystem", 00:13:53.568 "trtype": "$TEST_TRANSPORT", 00:13:53.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:53.568 "adrfam": "ipv4", 00:13:53.568 "trsvcid": "$NVMF_PORT", 00:13:53.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:53.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:53.568 "hdgst": ${hdgst:-false}, 00:13:53.568 "ddgst": ${ddgst:-false} 00:13:53.568 }, 00:13:53.568 "method": "bdev_nvme_attach_controller" 00:13:53.568 } 00:13:53.568 EOF 00:13:53.568 )") 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 727082 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:53.568 "params": { 00:13:53.568 "name": "Nvme1", 00:13:53.568 "trtype": "tcp", 00:13:53.568 "traddr": "10.0.0.2", 00:13:53.568 "adrfam": "ipv4", 00:13:53.568 "trsvcid": "4420", 00:13:53.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.568 "hdgst": false, 00:13:53.568 "ddgst": false 00:13:53.568 }, 00:13:53.568 "method": "bdev_nvme_attach_controller" 00:13:53.568 }' 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:53.568 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:53.568 "params": { 00:13:53.568 "name": "Nvme1", 00:13:53.568 "trtype": "tcp", 00:13:53.568 "traddr": "10.0.0.2", 00:13:53.568 "adrfam": "ipv4", 00:13:53.568 "trsvcid": "4420", 00:13:53.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.568 "hdgst": false, 00:13:53.568 "ddgst": false 00:13:53.568 }, 00:13:53.569 "method": "bdev_nvme_attach_controller" 00:13:53.569 }' 00:13:53.569 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:53.569 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:53.569 "params": { 00:13:53.569 "name": "Nvme1", 00:13:53.569 "trtype": "tcp", 00:13:53.569 "traddr": "10.0.0.2", 00:13:53.569 "adrfam": "ipv4", 00:13:53.569 "trsvcid": "4420", 00:13:53.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.569 "hdgst": false, 00:13:53.569 "ddgst": false 00:13:53.569 }, 00:13:53.569 "method": "bdev_nvme_attach_controller" 00:13:53.569 }' 00:13:53.569 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:53.569 15:50:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:53.569 "params": { 00:13:53.569 "name": "Nvme1", 00:13:53.569 "trtype": "tcp", 00:13:53.569 "traddr": "10.0.0.2", 00:13:53.569 "adrfam": "ipv4", 00:13:53.569 "trsvcid": "4420", 00:13:53.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:53.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:53.569 "hdgst": false, 00:13:53.569 "ddgst": false 00:13:53.569 }, 00:13:53.569 "method": 
"bdev_nvme_attach_controller" 00:13:53.569 }' 00:13:53.569 [2024-07-12 15:50:50.691064] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:13:53.569 [2024-07-12 15:50:50.691064] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:13:53.569 [2024-07-12 15:50:50.691152] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:53.569 [2024-07-12 15:50:50.691157] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:53.569 [2024-07-12 15:50:50.693183] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:13:53.569 [2024-07-12 15:50:50.693176] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:13:53.569 [2024-07-12 15:50:50.693258] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-12 15:50:50.693258] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:53.569 --proc-type=auto ] 00:13:53.569 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.569 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.827 [2024-07-12 15:50:50.865228] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.827 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.827 [2024-07-12 15:50:50.965838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:53.827 [2024-07-12 15:50:50.967150] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.827 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.827 [2024-07-12 15:50:51.041471] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.827 [2024-07-12 15:50:51.065424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:53.827 [2024-07-12 15:50:51.113273] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.085 [2024-07-12 15:50:51.137842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:54.085 [2024-07-12 15:50:51.208189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:13:54.343 Running I/O for 1 seconds... 00:13:54.343 Running I/O for 1 seconds... 00:13:54.343 Running I/O for 1 seconds... 00:13:54.343 Running I/O for 1 seconds... 
00:13:55.274 00:13:55.274 Latency(us) 00:13:55.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.274 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:55.274 Nvme1n1 : 1.01 10494.45 40.99 0.00 0.00 12142.68 8349.77 21651.15 00:13:55.274 =================================================================================================================== 00:13:55.274 Total : 10494.45 40.99 0.00 0.00 12142.68 8349.77 21651.15 00:13:55.274 00:13:55.274 Latency(us) 00:13:55.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.274 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:55.274 Nvme1n1 : 1.01 8569.69 33.48 0.00 0.00 14866.59 4878.79 22913.33 00:13:55.274 =================================================================================================================== 00:13:55.274 Total : 8569.69 33.48 0.00 0.00 14866.59 4878.79 22913.33 00:13:55.274 00:13:55.274 Latency(us) 00:13:55.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.274 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:55.274 Nvme1n1 : 1.00 200423.46 782.90 0.00 0.00 635.98 263.96 983.04 00:13:55.274 =================================================================================================================== 00:13:55.274 Total : 200423.46 782.90 0.00 0.00 635.98 263.96 983.04 00:13:55.274 00:13:55.274 Latency(us) 00:13:55.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.274 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:55.274 Nvme1n1 : 1.01 9267.82 36.20 0.00 0.00 13753.63 7136.14 24563.86 00:13:55.274 =================================================================================================================== 00:13:55.274 Total : 9267.82 36.20 0.00 0.00 13753.63 7136.14 24563.86 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 727084 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 727086 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 727089 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:55.838 rmmod nvme_tcp 00:13:55.838 rmmod nvme_fabrics 00:13:55.838 rmmod nvme_keyring 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 727044 ']' 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 727044 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 727044 ']' 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 727044 00:13:55.838 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:13:55.839 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:55.839 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 727044 00:13:55.839 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:55.839 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:55.839 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 727044' 00:13:55.839 killing process with pid 727044 00:13:55.839 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 727044 00:13:55.839 15:50:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 727044 00:13:56.095 15:50:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:56.095 15:50:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:56.095 15:50:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:56.095 15:50:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:56.095 15:50:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:56.095 15:50:53 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.095 15:50:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.095 15:50:53 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.997 15:50:55 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:57.997 00:13:57.997 real 0m7.311s 00:13:57.997 user 0m17.503s 00:13:57.997 sys 0m3.541s 00:13:57.997 15:50:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:57.997 15:50:55 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:57.997 ************************************ 00:13:57.997 END TEST nvmf_bdev_io_wait 00:13:57.997 ************************************ 00:13:58.255 15:50:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:58.255 15:50:55 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:58.255 15:50:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:58.255 15:50:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.255 15:50:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:58.255 ************************************ 00:13:58.255 START TEST nvmf_queue_depth 00:13:58.255 ************************************ 00:13:58.255 
15:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:58.255 * Looking for test storage... 00:13:58.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.255 15:50:55 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:13:58.256 15:50:55 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:00.783 
15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:00.783 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:00.783 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:00.783 Found net devices under 0000:84:00.0: cvl_0_0 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:00.783 Found net devices under 0000:84:00.1: cvl_0_1 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:00.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:00.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:14:00.783 00:14:00.783 --- 10.0.0.2 ping statistics --- 00:14:00.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.783 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:14:00.783 00:14:00.783 --- 10.0.0.1 ping statistics --- 00:14:00.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.783 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:14:00.783 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=729319 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 729319 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 729319 ']' 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:00.784 [2024-07-12 15:50:57.709249] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
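The nvmf_tcp_init sequence traced above builds a loopback topology out of the two E810 ports: the target port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are verified with ping. A standalone sketch of the same setup, with interface names and addresses taken from this particular run (they will differ on other hosts), would be:

  #!/usr/bin/env bash
  # Sketch of the loopback topology nvmf_tcp_init builds in the trace above.
  # cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this run.
  set -e
  TGT_IF=cvl_0_0          # port the SPDK target will own
  INI_IF=cvl_0_1          # port the initiator keeps in the root namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"

  ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target side
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Open the NVMe/TCP listener port and check reachability in both directions.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

With the namespace in place, nvmfappstart prefixes the target command with it, which is why the trace above shows nvmf_tgt being launched as 'ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2'.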
00:14:00.784 [2024-07-12 15:50:57.709321] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.784 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.784 [2024-07-12 15:50:57.776443] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.784 [2024-07-12 15:50:57.884760] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.784 [2024-07-12 15:50:57.884824] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.784 [2024-07-12 15:50:57.884838] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.784 [2024-07-12 15:50:57.884850] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.784 [2024-07-12 15:50:57.884860] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.784 [2024-07-12 15:50:57.884887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:00.784 15:50:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:00.784 [2024-07-12 15:50:58.009358] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:00.784 Malloc0 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.784 
15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:00.784 [2024-07-12 15:50:58.070289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=729460 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:00.784 15:50:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 729460 /var/tmp/bdevperf.sock 00:14:01.042 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 729460 ']' 00:14:01.042 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.042 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.042 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:01.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.042 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.042 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:01.042 [2024-07-12 15:50:58.119030] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
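At this point the queue-depth target is fully provisioned and bdevperf has been launched on the initiator side. Condensed into a standalone form (rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, which talks to the target's default /var/tmp/spdk.sock socket; paths are shortened here), the sequence traced above amounts to roughly:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py"

  # Target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks,
  # and a subsystem exposing it on 10.0.0.2:4420 (flags copied from the trace).
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf idles in RPC mode (-z) on its own socket,
  # configured for queue depth 1024, 4 KiB I/O, a verify workload and a 10 s run.
  "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
      -q 1024 -o 4096 -w verify -t 10 &

The lines that follow connect that idle bdevperf instance to the subsystem just exported (bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1, sent to /var/tmp/bdevperf.sock) and start the measurement with examples/bdev/bdevperf/bdevperf.py perform_tests, which is what produces the roughly 9818 IOPS result printed below.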
00:14:01.042 [2024-07-12 15:50:58.119116] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid729460 ] 00:14:01.042 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.042 [2024-07-12 15:50:58.176295] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.042 [2024-07-12 15:50:58.290100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.299 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.299 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:14:01.299 15:50:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:01.299 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.299 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:01.556 NVMe0n1 00:14:01.557 15:50:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.557 15:50:58 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:01.557 Running I/O for 10 seconds... 00:14:13.750 00:14:13.750 Latency(us) 00:14:13.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.750 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:13.750 Verification LBA range: start 0x0 length 0x4000 00:14:13.750 NVMe0n1 : 10.09 9818.09 38.35 0.00 0.00 103877.77 20680.25 65633.09 00:14:13.750 =================================================================================================================== 00:14:13.750 Total : 9818.09 38.35 0.00 0.00 103877.77 20680.25 65633.09 00:14:13.750 0 00:14:13.750 15:51:08 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 729460 00:14:13.750 15:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 729460 ']' 00:14:13.750 15:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 729460 00:14:13.750 15:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:13.750 15:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:13.750 15:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 729460 00:14:13.750 15:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:13.750 15:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:13.750 15:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 729460' 00:14:13.750 killing process with pid 729460 00:14:13.750 15:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 729460 00:14:13.750 Received shutdown signal, test time was about 10.000000 seconds 00:14:13.750 00:14:13.750 Latency(us) 00:14:13.750 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:13.750 =================================================================================================================== 
00:14:13.750 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:13.750 15:51:08 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 729460 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:13.750 rmmod nvme_tcp 00:14:13.750 rmmod nvme_fabrics 00:14:13.750 rmmod nvme_keyring 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 729319 ']' 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 729319 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 729319 ']' 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 729319 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 729319 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 729319' 00:14:13.750 killing process with pid 729319 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 729319 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 729319 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.750 15:51:09 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.318 15:51:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:14.318 00:14:14.318 real 0m16.214s 00:14:14.318 user 0m22.398s 
00:14:14.318 sys 0m3.450s 00:14:14.318 15:51:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:14.318 15:51:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:14.318 ************************************ 00:14:14.318 END TEST nvmf_queue_depth 00:14:14.318 ************************************ 00:14:14.318 15:51:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:14.318 15:51:11 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:14.318 15:51:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:14.318 15:51:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:14.318 15:51:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:14.318 ************************************ 00:14:14.318 START TEST nvmf_target_multipath 00:14:14.318 ************************************ 00:14:14.318 15:51:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:14.576 * Looking for test storage... 00:14:14.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:14.576 15:51:11 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.576 15:51:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:14:14.577 15:51:11 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.498 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:16.499 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:16.499 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:16.499 Found net devices under 0000:84:00.0: cvl_0_0 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:16.499 Found net devices under 0000:84:00.1: cvl_0_1 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.499 15:51:13 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:16.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:14:16.756 00:14:16.756 --- 10.0.0.2 ping statistics --- 00:14:16.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.756 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:16.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:14:16.756 00:14:16.756 --- 10.0.0.1 ping statistics --- 00:14:16.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.756 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:14:16.756 only one NIC for nvmf test 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:16.756 rmmod nvme_tcp 00:14:16.756 rmmod nvme_fabrics 00:14:16.756 rmmod nvme_keyring 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:16.756 15:51:13 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
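The multipath test above stops before doing any I/O: nvmf_tcp_init found only a single usable port pair on this host and left NVMF_SECOND_TARGET_IP empty (common.sh@240 above), so the empty-string check traced at multipath.sh@45 prints 'only one NIC for nvmf test', runs nvmftestfini and exits cleanly (the 'exit 0' at multipath.sh@48 appears just below). Assuming the variable under test is indeed NVMF_SECOND_TARGET_IP, the guard is effectively:

  # Early-exit guard corresponding to multipath.sh@45-48 in the trace
  # (variable name inferred from the empty NVMF_SECOND_TARGET_IP set above).
  if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
      echo "only one NIC for nvmf test"
      nvmftestfini
      exit 0
  fi

On single-NIC runs like this one the multipath suite is therefore a no-op and reports a pass after a few seconds of setup and teardown.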
cvl_0_1 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.290 00:14:19.290 real 0m4.447s 00:14:19.290 user 0m0.870s 00:14:19.290 sys 0m1.569s 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:19.290 15:51:16 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:14:19.290 ************************************ 00:14:19.290 END TEST nvmf_target_multipath 00:14:19.290 ************************************ 00:14:19.290 15:51:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:19.290 15:51:16 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:19.290 15:51:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:19.290 15:51:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.290 15:51:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:19.290 ************************************ 00:14:19.290 START TEST nvmf_zcopy 00:14:19.290 ************************************ 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:14:19.290 * Looking for test storage... 
00:14:19.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.290 15:51:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:14:19.291 15:51:16 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:21.187 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.187 
15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:21.187 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:21.187 Found net devices under 0000:84:00.0: cvl_0_0 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:21.187 Found net devices under 0000:84:00.1: cvl_0_1 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:21.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:14:21.187 00:14:21.187 --- 10.0.0.2 ping statistics --- 00:14:21.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.187 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:21.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:14:21.187 00:14:21.187 --- 10.0.0.1 ping statistics --- 00:14:21.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.187 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=735286 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 735286 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 735286 ']' 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.187 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:21.188 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.188 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:21.188 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.188 [2024-07-12 15:51:18.467142] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:14:21.188 [2024-07-12 15:51:18.467238] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.445 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.445 [2024-07-12 15:51:18.531545] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.445 [2024-07-12 15:51:18.637768] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.445 [2024-07-12 15:51:18.637820] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:21.445 [2024-07-12 15:51:18.637844] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.445 [2024-07-12 15:51:18.637863] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.445 [2024-07-12 15:51:18.637873] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.445 [2024-07-12 15:51:18.637905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.702 [2024-07-12 15:51:18.790388] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.702 [2024-07-12 15:51:18.806568] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.702 malloc0 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.702 
15:51:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:21.702 { 00:14:21.702 "params": { 00:14:21.702 "name": "Nvme$subsystem", 00:14:21.702 "trtype": "$TEST_TRANSPORT", 00:14:21.702 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:21.702 "adrfam": "ipv4", 00:14:21.702 "trsvcid": "$NVMF_PORT", 00:14:21.702 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:21.702 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:21.702 "hdgst": ${hdgst:-false}, 00:14:21.702 "ddgst": ${ddgst:-false} 00:14:21.702 }, 00:14:21.702 "method": "bdev_nvme_attach_controller" 00:14:21.702 } 00:14:21.702 EOF 00:14:21.702 )") 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:21.702 15:51:18 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:21.702 "params": { 00:14:21.702 "name": "Nvme1", 00:14:21.702 "trtype": "tcp", 00:14:21.702 "traddr": "10.0.0.2", 00:14:21.702 "adrfam": "ipv4", 00:14:21.702 "trsvcid": "4420", 00:14:21.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.702 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:21.702 "hdgst": false, 00:14:21.702 "ddgst": false 00:14:21.702 }, 00:14:21.702 "method": "bdev_nvme_attach_controller" 00:14:21.702 }' 00:14:21.702 [2024-07-12 15:51:18.890542] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:14:21.702 [2024-07-12 15:51:18.890618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid735310 ] 00:14:21.702 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.702 [2024-07-12 15:51:18.954518] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.959 [2024-07-12 15:51:19.064553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.216 Running I/O for 10 seconds... 
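The target-side setup that zcopy.sh traced above boils down to a handful of RPCs against the nvmf_tgt instance running inside the cvl_0_0_ns_spdk namespace. A minimal stand-alone sketch of the same sequence, assuming scripts/rpc.py is called directly from the SPDK repository root instead of through the test suite's rpc_cmd helper (the RPC names and arguments are copied from the trace; only the direct invocation is an assumption of this sketch):

# TCP transport with zero-copy enabled, reusing the -o and -c 0 flags from the trace.
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
# Subsystem that allows any host (-a), with a serial number and a 10-namespace cap.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# Listener on the target-side address assigned inside the network namespace.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 32 MiB malloc bdev with a 4096-byte block size, exported as namespace 1.
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1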
00:14:32.180
00:14:32.180                                                           Latency(us)
00:14:32.180 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:32.180 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:14:32.180 Verification LBA range: start 0x0 length 0x1000
00:14:32.181 Nvme1n1                     :      10.01    6500.34      50.78       0.00       0.00   19639.59    2184.53   29709.65
00:14:32.181 ===================================================================================================================
00:14:32.181 Total                       :                6500.34      50.78       0.00       0.00   19639.59    2184.53   29709.65
00:14:32.439 15:51:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=736505 00:14:32.439 15:51:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:32.439 15:51:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:32.439 15:51:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:32.439 15:51:29 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:32.439 15:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:14:32.439 15:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:14:32.439 15:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:32.439 15:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:32.439 { 00:14:32.439 "params": { 00:14:32.439 "name": "Nvme$subsystem", 00:14:32.439 "trtype": "$TEST_TRANSPORT", 00:14:32.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:32.439 "adrfam": "ipv4", 00:14:32.439 "trsvcid": "$NVMF_PORT", 00:14:32.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:32.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:32.439 "hdgst": ${hdgst:-false}, 00:14:32.439 "ddgst": ${ddgst:-false} 00:14:32.439 }, 00:14:32.439 "method": "bdev_nvme_attach_controller" 00:14:32.439 } 00:14:32.439 EOF 00:14:32.439 )") 00:14:32.439 15:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:14:32.439 [2024-07-12 15:51:29.597530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.597579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 15:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
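As a sanity check, the verify-run table above is internally consistent: 6500.34 IOPS at an 8192-byte I/O size works out to 6500.34 * 8192 / 1048576 ≈ 50.78 MiB/s, matching the MiB/s column. The trace also shows how bdevperf is driven: gen_nvmf_target_json emits a JSON bdev config on the fly and bdevperf reads it over a /dev/fd process-substitution path. An equivalent explicit invocation is sketched below, assuming the attach-controller entry printed in the trace is wrapped in a minimal bdev-subsystem config saved to a file (the bdevperf_nvme.json name is illustrative, and the real gen_nvmf_target_json output may carry additional bdev options):

# Write the bdev_nvme_attach_controller parameters from the trace into a minimal
# SPDK JSON config (illustrative file name).
cat > bdevperf_nvme.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# 5-second 50/50 random read/write run at queue depth 128 with 8 KiB I/O,
# matching the flags from the trace.
./build/examples/bdevperf --json bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192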
00:14:32.439 15:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:14:32.439 15:51:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:32.439 "params": { 00:14:32.439 "name": "Nvme1", 00:14:32.439 "trtype": "tcp", 00:14:32.439 "traddr": "10.0.0.2", 00:14:32.439 "adrfam": "ipv4", 00:14:32.439 "trsvcid": "4420", 00:14:32.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:32.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:32.439 "hdgst": false, 00:14:32.439 "ddgst": false 00:14:32.439 }, 00:14:32.439 "method": "bdev_nvme_attach_controller" 00:14:32.439 }' 00:14:32.439 [2024-07-12 15:51:29.605480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.605502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 [2024-07-12 15:51:29.613503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.613524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 [2024-07-12 15:51:29.621524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.621544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 [2024-07-12 15:51:29.629548] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.629569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 [2024-07-12 15:51:29.633858] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:14:32.439 [2024-07-12 15:51:29.633924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid736505 ] 00:14:32.439 [2024-07-12 15:51:29.637566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.637586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 [2024-07-12 15:51:29.645588] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.645608] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 [2024-07-12 15:51:29.653611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.653632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 [2024-07-12 15:51:29.661636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.661658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.439 [2024-07-12 15:51:29.669655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.669675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 [2024-07-12 15:51:29.677678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.677699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 [2024-07-12 15:51:29.685700] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.685734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 [2024-07-12 15:51:29.693743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.693764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 [2024-07-12 15:51:29.694385] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.439 [2024-07-12 15:51:29.701799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.701833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 [2024-07-12 15:51:29.709822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.709860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 [2024-07-12 15:51:29.717823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.717860] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.439 [2024-07-12 15:51:29.725827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.439 [2024-07-12 15:51:29.725849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.733862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.733884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.741869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.741892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.749889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.749910] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.757941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.757975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.765951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.765982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.773953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.773974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.781998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.782034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.789997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.790018] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.798033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.798055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.806058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.806080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.807107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.698 [2024-07-12 15:51:29.814077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.814098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.822119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.822147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.830141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.830178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.838180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.838218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.846203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.846241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.854209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.854245] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.862237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.862279] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.870256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.870295] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.878239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.878267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.886288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.886326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.894307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.894346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.902303] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.698 [2024-07-12 15:51:29.902324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.910328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:14:32.698 [2024-07-12 15:51:29.910349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.698 [2024-07-12 15:51:29.918344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.699 [2024-07-12 15:51:29.918364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.699 [2024-07-12 15:51:29.926386] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.699 [2024-07-12 15:51:29.926409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.699 [2024-07-12 15:51:29.934396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.699 [2024-07-12 15:51:29.934420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.699 [2024-07-12 15:51:29.942418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.699 [2024-07-12 15:51:29.942440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.699 [2024-07-12 15:51:29.950442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.699 [2024-07-12 15:51:29.950465] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.699 [2024-07-12 15:51:29.958467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.699 [2024-07-12 15:51:29.958490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.699 [2024-07-12 15:51:29.966483] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.699 [2024-07-12 15:51:29.966516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.699 [2024-07-12 15:51:29.974502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.699 [2024-07-12 15:51:29.974524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.699 [2024-07-12 15:51:29.982526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.699 [2024-07-12 15:51:29.982546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.699 [2024-07-12 15:51:29.990557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:32.699 [2024-07-12 15:51:29.990581] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:32.956 Running I/O for 5 seconds... 
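The burst of paired error messages that follows is expected here rather than a sign of failure: while the 5-second random read/write job keeps zero-copy requests in flight, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is already taken, so each attempt pauses the subsystem, fails in the nvmf_rpc_ns_paused callback with "Unable to add namespace", and the subsystem is resumed again, exercising subsystem pause/resume under active zcopy I/O. A rough sketch of that pattern (the loop shape is an assumption and is not copied verbatim from zcopy.sh):

# Keep poking the subsystem with add-namespace requests while bdevperf (perfpid
# recorded in the trace above) still has I/O outstanding; every request is expected
# to fail because NSID 1 is already in use.
while kill -0 "$perfpid" 2> /dev/null; do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done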
00:14:32.956 .. 00:14:33.991 subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:32.956 .. 00:14:33.991 nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the two messages above repeat as a pair, roughly every 10 ms, from 15:51:29.998586 through 15:51:31.202188)
00:14:33.991 [2024-07-12 15:51:31.212643] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.991 [2024-07-12 15:51:31.212668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.991 [2024-07-12 15:51:31.223058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.991 [2024-07-12 15:51:31.223105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.991 [2024-07-12 15:51:31.233732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.991 [2024-07-12 15:51:31.233764] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.991 [2024-07-12 15:51:31.243606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.991 [2024-07-12 15:51:31.243631] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.991 [2024-07-12 15:51:31.253629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.991 [2024-07-12 15:51:31.253653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.991 [2024-07-12 15:51:31.263795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.991 [2024-07-12 15:51:31.263821] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:33.991 [2024-07-12 15:51:31.274114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:33.991 [2024-07-12 15:51:31.274139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.284840] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.284868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.294971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.294996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.305341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.305365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.317199] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.317224] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.326475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.326500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.337358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.337383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.349045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.349072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.358287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.358312] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.368583] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.368607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.380429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.380455] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.390063] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.390105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.400351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.400375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.412993] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.413034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.422288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.422321] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.434641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.434666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.446248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.446273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.455219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.455243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.466007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.466048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.477914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.477941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.486959] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.486985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.496561] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.496586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.506554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.506579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.516748] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.516774] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.526901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.526927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.250 [2024-07-12 15:51:31.537391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.250 [2024-07-12 15:51:31.537416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.548666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.548692] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.558847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.558872] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.570429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.570454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.580139] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.580165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.590420] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.590444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.600168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.600194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.612176] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.612200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.623302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.623333] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.631908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.631934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.642448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.642473] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.652493] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.652517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.662282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.662306] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.671618] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.671642] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.681253] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.681278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.690932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.690958] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.700858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.700884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.710684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.710708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.720896] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.720922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.732359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.732384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.742028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.742053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.751867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.751892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.761629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.761653] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.771563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.771587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.781539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.508 [2024-07-12 15:51:31.781563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.508 [2024-07-12 15:51:31.791261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.509 [2024-07-12 15:51:31.791285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.801961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.801989] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.814162] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.814193] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.823440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.823464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.835300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.835326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.844823] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.844849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.854981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.855008] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.864869] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.864895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.874950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.874977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.884882] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.884909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.895246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.895271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.905113] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.905138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.914581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.914606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.924507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.924532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.934640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.934664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.944681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.944707] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.954534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.954559] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.964305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.964331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.974375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.974401] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.984695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.984736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:31.994904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:31.994931] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:32.004944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:32.004972] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:32.014635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:32.014660] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:32.024329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:32.024354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:32.034650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:32.034674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:32.044413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:32.044438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:34.767 [2024-07-12 15:51:32.054586] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:34.767 [2024-07-12 15:51:32.054610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.064914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.064942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.074907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.074934] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.084862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.084889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.094760] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.094787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.104659] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.104683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.114607] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.114632] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.125070] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.125095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.135079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.135120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.145302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.145327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.155276] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.155302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.165689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.165715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.178631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.178656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.188312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.188337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.197991] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.198019] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.207996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.208037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.218125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.218150] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.228492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.228519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.239562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.239588] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.250333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.250359] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.262418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.262442] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.273907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.273935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.282456] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.282481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.294384] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.294409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.305748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.305775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.026 [2024-07-12 15:51:32.314476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.026 [2024-07-12 15:51:32.314502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.326040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.326067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.337709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.337758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.347306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.347330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.358994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.359034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.368496] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.368522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.378804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.378830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.391770] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.391810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.403226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.403251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.411966] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.411992] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.423169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.423194] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.433408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.433433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.443205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.443231] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.453333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.453359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.463362] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.463387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.473511] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.284 [2024-07-12 15:51:32.473536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.284 [2024-07-12 15:51:32.484049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.285 [2024-07-12 15:51:32.484074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.285 [2024-07-12 15:51:32.496407] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.285 [2024-07-12 15:51:32.496432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.285 [2024-07-12 15:51:32.506358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.285 [2024-07-12 15:51:32.506383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.285 [2024-07-12 15:51:32.516673] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.285 [2024-07-12 15:51:32.516697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.285 [2024-07-12 15:51:32.525994] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.285 [2024-07-12 15:51:32.526034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.285 [2024-07-12 15:51:32.536252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.285 [2024-07-12 15:51:32.536277] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.285 [2024-07-12 15:51:32.548684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.285 [2024-07-12 15:51:32.548709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.285 [2024-07-12 15:51:32.558927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.285 [2024-07-12 15:51:32.558955] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.285 [2024-07-12 15:51:32.568808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.285 [2024-07-12 15:51:32.568834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.579310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.579335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.589567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.589592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.599851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.599878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.612652] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.612678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.622439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.622464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.632874] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.632900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.643373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.643398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.653948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.653975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.666581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.666606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.676414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.676439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.686851] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.686878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.699170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.699195] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.708120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.708146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.718779] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.718805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.729265] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.729290] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.741824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.741850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.753497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.753522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.762855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.762882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.773053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.773079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.783452] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.783477] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.793692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.793744] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.804074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.804099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.816004] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.816043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.825072] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.825096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.543 [2024-07-12 15:51:32.835815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.543 [2024-07-12 15:51:32.835843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.846528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.846554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.856783] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.856809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.868957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.868984] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.878390] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.878414] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.888421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.888446] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.898694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.898734] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.909356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.909381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.919357] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.919382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.929383] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.929408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.939191] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.939217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.948824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.948851] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.958889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.958916] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.969189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.969215] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.980208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.980234] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:32.990402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:32.990434] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:33.003279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:33.003304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:33.014802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:33.014829] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:33.023441] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:33.023466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:33.035358] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:33.035383] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:33.044699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:33.044747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:33.055509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:33.055534] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:33.065488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:33.065513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:33.075570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:33.075595] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:35.801 [2024-07-12 15:51:33.086119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:35.801 [2024-07-12 15:51:33.086146] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.097262] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.097303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.107745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.107772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.117924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.117951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.127953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.127979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.138306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.138332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.148340] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.148365] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.158016] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.158057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.168309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.168333] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.178447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.178472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.188692] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.188746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.198750] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.198776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.208984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.209011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.220971] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.220998] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.230479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.230504] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.240293] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.240317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.250457] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.250481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.260352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.260377] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.270292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.270317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.280425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.280450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.290506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.290531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.301893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.301919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.313329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.313353] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.322074] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.322112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.332566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.332590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.342404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.342429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.060 [2024-07-12 15:51:33.352861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.060 [2024-07-12 15:51:33.352889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.363102] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.363127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.373033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.373058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.383875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.383909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.394996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.395037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.408438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.408463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.418587] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.418612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.428237] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.428262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.438145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.438170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.447908] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.447935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.458279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.458304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.468921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.468948] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.479233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.479258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.489582] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.489607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.499821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.499847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.509955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.509982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.520165] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.520189] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.530239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.530264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.540468] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.540493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.550794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.550820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.562249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.562273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.573460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.573485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.582114] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.582139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.594145] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.594170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.319 [2024-07-12 15:51:33.603449] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.319 [2024-07-12 15:51:33.603474] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.614957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.614986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.626823] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.626850] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.635599] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.635623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.646302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.646327] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.656271] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.656296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.665951] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.665977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.676057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.676095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.686071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.686110] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.695918] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.695945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.705949] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.705976] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.716126] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.716151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.726681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.726706] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.736660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.736684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.746907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.746933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.756929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.756956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.767151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.767177] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.777065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.777112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.788674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.788699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.797893] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.797918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.808773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.808814] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.819152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.819177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.829130] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.829156] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.839458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.839484] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.849818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.849845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.578 [2024-07-12 15:51:33.860273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.578 [2024-07-12 15:51:33.860298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:33.871558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:33.871587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:33.881754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:33.881781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:33.891743] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:33.891770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:33.902135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:33.902160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:33.912212] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:33.912237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:33.922312] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:33.922337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:33.932519] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:33.932543] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:33.942641] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:33.942666] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:33.952958] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:33.952985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:33.963632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:33.963657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:33.973772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:33.973813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:33.984217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:33.984242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:33.993903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:33.993930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:34.003854] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:34.003881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:34.013624] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:34.013649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:34.023621] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:34.023646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:34.034247] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:34.034273] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:34.046096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:34.046121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:34.056101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:34.056126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:34.067039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:34.067066] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:34.077278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:34.077303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:34.087307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:34.087332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:34.097774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:34.097801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:34.110177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.836 [2024-07-12 15:51:34.110202] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:36.836 [2024-07-12 15:51:34.120135] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:36.837 [2024-07-12 15:51:34.120160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.130404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.130429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.140877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.140904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.150564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.150589] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.160856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.160883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.178172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.178198] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.188161] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.188186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.198138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.198164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.208183] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.208208] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.218331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.218356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.228887] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.228914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.239523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.239550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.251222] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.251249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.260036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.260062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.270539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.270563] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.280915] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.280941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.290995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.291037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.301060] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.301085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.310853] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.310882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.320939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.320965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.330983] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.331009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.341110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.341135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.351040] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.351065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.360939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.360973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.371055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.371079] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.095 [2024-07-12 15:51:34.381439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.095 [2024-07-12 15:51:34.381462] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.353 [2024-07-12 15:51:34.392676] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.353 [2024-07-12 15:51:34.392700] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.353 [2024-07-12 15:51:34.403034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.353 [2024-07-12 15:51:34.403059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.353 [2024-07-12 15:51:34.415403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.415427] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.425050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.425074] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.435263] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.435288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.446038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.446062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.457626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.457650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.467065] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.467103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.477469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.477493] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.489866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.489893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.501360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.501384] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.510051] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.510093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.520905] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.520930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.533110] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.533134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.543301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.543326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.553620] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.553645] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.564418] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.564449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.576325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.576361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.586329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.586355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.596684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.596709] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.606919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.606946] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.617808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.617835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.630316] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.630341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.354 [2024-07-12 15:51:34.640625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.354 [2024-07-12 15:51:34.640651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.651798] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.651825] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.662500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.662525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.672964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.672991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.683857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.683884] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.694552] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.694577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.705649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.705675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.716632] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.716657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.729170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.729196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.739080] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.739120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.749870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.749896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.760429] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.760453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.771031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.771062] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.783502] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.783527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.793940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.793968] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.804766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.804793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.816937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.816967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.827278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.827303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.838125] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.838151] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.848576] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.848602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.859045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.859071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.871593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.871618] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.881815] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.881842] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.892310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.892335] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.613 [2024-07-12 15:51:34.902883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.613 [2024-07-12 15:51:34.902909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:34.913868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:34.913894] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:34.924440] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:34.924464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:34.934916] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:34.934943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:34.947581] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:34.947606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:34.958201] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:34.958226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:34.968909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:34.968935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:34.981631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:34.981665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:34.991962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:34.991988] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.002862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.002896] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.015305] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.015330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.020627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.020651] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:37.872
00:14:37.872 Latency(us)
00:14:37.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:37.872 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:14:37.872 Nvme1n1 : 5.01 12437.97 97.17 0.00 0.00 10276.35 4417.61 22719.15
00:14:37.872 ===================================================================================================================
00:14:37.872 Total : 12437.97 97.17 0.00 0.00 10276.35 4417.61 22719.15
00:14:37.872 [2024-07-12 15:51:35.028645] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.028668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.036665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.036688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.044721] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.044763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.052795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.052856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.060808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.060861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.068831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.068881] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.076863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.076917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.084872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.084918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.092900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.092952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.100920] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.100974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.108922]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.108973] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.116948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.117000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.124972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.125023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.132992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.133043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.141005] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.141058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.149031] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.149080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.157053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.157104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:37.872 [2024-07-12 15:51:35.165046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:37.872 [2024-07-12 15:51:35.165096] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.173061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.173097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.181074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.181109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.189110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.189130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.197118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.197140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.205207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.205262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.213215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.213264] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.221184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.221210] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.229193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.229212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.237215] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.237235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.245234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.245254] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.253280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.253308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.261341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.261391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.269359] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.269409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.277323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.277344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.285342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.285361] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 [2024-07-12 15:51:35.293377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:38.143 [2024-07-12 15:51:35.293396] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:38.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (736505) - No such process 00:14:38.143 15:51:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 736505 00:14:38.143 15:51:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.143 15:51:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.143 15:51:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.143 15:51:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.143 15:51:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:38.143 15:51:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.143 15:51:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.143 delay0 00:14:38.143 15:51:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.143 15:51:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:38.143 15:51:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:38.143 15:51:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:38.143 15:51:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.143 15:51:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:38.143 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.402 [2024-07-12 15:51:35.452832] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:44.952 [2024-07-12 15:51:41.677669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5cefb0 is same with the state(5) to be set 00:14:44.952 Initializing NVMe Controllers 00:14:44.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:44.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:44.952 Initialization complete. Launching workers. 00:14:44.952 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1041 00:14:44.952 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1323, failed to submit 38 00:14:44.952 success 1147, unsuccess 176, failed 0 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:44.952 rmmod nvme_tcp 00:14:44.952 rmmod nvme_fabrics 00:14:44.952 rmmod nvme_keyring 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 735286 ']' 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 735286 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 735286 ']' 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 735286 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 735286 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 735286' 00:14:44.952 killing process with pid 735286 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 
-- # kill 735286 00:14:44.952 15:51:41 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 735286 00:14:44.952 15:51:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:44.952 15:51:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:44.952 15:51:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:44.952 15:51:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:44.952 15:51:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:44.952 15:51:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.952 15:51:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.952 15:51:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.856 15:51:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:46.856 00:14:46.856 real 0m28.006s 00:14:46.856 user 0m39.975s 00:14:46.856 sys 0m9.671s 00:14:46.856 15:51:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.856 15:51:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:46.856 ************************************ 00:14:46.856 END TEST nvmf_zcopy 00:14:46.856 ************************************ 00:14:46.856 15:51:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:46.856 15:51:44 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:46.856 15:51:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:46.856 15:51:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.856 15:51:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:46.856 ************************************ 00:14:46.856 START TEST nvmf_nmic 00:14:46.856 ************************************ 00:14:46.856 15:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:47.113 * Looking for test storage... 
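Note on the zcopy run that just finished: the traced rpc_cmd calls drive the target entirely over SPDK's JSON-RPC interface, first removing namespace 1 from nqn.2016-06.io.spdk:cnode1, then creating a delay0 bdev on top of malloc0, re-attaching it as namespace 1, and finally launching the bundled abort example against the TCP listener at 10.0.0.2:4420. Purely as an illustration (this is not output from this job, and it assumes rpc_cmd wraps scripts/rpc.py with a locally running target), the same sequence could be replayed by hand roughly like this:

    # Hypothetical manual replay of the traced zcopy steps; a sketch, not this job's script.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # delay values copied from the trace above
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Abort-heavy workload against the TCP listener, mirroring the flags seen in the log.
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'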
00:14:47.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.113 15:51:44 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:14:47.113 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:14:47.114 15:51:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:49.661 
15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:49.661 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.661 15:51:46 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:49.661 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:49.661 Found net devices under 0000:84:00.0: cvl_0_0 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:49.661 Found net devices under 0000:84:00.1: cvl_0_1 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
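The probe traced above finds both E810 physical functions (0000:84:00.0 and 0000:84:00.1, device ID 0x159b) and then resolves the kernel interface bound to each one through the /sys/bus/pci/devices/$pci/net/ glob. A minimal standalone sketch of that lookup, reusing the PCI addresses from this run (the loop variables are illustrative, not part of the harness):

# Resolve the net device name behind each NVMe-oF-capable PCI function,
# mirroring the sysfs glob used by gather_supported_nvmf_pci_devs.
for pci in 0000:84:00.0 0000:84:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] || continue        # no bound net device, skip
        dev=${netdir##*/}                   # e.g. cvl_0_0 / cvl_0_1
        echo "Found net devices under $pci: $dev ($(cat "$netdir/operstate"))"
    done
done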
00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.661 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:49.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:14:49.661 00:14:49.661 --- 10.0.0.2 ping statistics --- 00:14:49.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.662 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:49.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:14:49.662 00:14:49.662 --- 10.0.0.1 ping statistics --- 00:14:49.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.662 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=739904 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 739904 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 739904 ']' 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.662 [2024-07-12 15:51:46.589664] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:14:49.662 [2024-07-12 15:51:46.589751] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.662 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.662 [2024-07-12 15:51:46.652362] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.662 [2024-07-12 15:51:46.763076] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.662 [2024-07-12 15:51:46.763143] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:49.662 [2024-07-12 15:51:46.763168] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.662 [2024-07-12 15:51:46.763179] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.662 [2024-07-12 15:51:46.763189] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.662 [2024-07-12 15:51:46.763257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.662 [2024-07-12 15:51:46.763655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.662 [2024-07-12 15:51:46.763715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.662 [2024-07-12 15:51:46.763712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.662 [2024-07-12 15:51:46.919706] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.662 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.662 Malloc0 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 [2024-07-12 15:51:46.973531] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:49.920 test case1: single bdev can't be used in multiple subsystems 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.920 15:51:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 [2024-07-12 15:51:46.997411] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:49.920 [2024-07-12 15:51:46.997440] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:49.920 [2024-07-12 15:51:46.997470] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:49.920 request: 00:14:49.920 { 00:14:49.920 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:49.920 "namespace": { 00:14:49.920 "bdev_name": "Malloc0", 00:14:49.920 "no_auto_visible": false 00:14:49.920 }, 00:14:49.920 "method": "nvmf_subsystem_add_ns", 00:14:49.920 "req_id": 1 00:14:49.920 } 00:14:49.920 Got JSON-RPC error response 00:14:49.920 response: 00:14:49.920 { 00:14:49.920 "code": -32602, 00:14:49.920 "message": "Invalid parameters" 00:14:49.920 } 00:14:49.920 15:51:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:49.920 15:51:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:49.920 15:51:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:49.920 15:51:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:49.920 Adding namespace failed - expected result. 
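Test case1 above exercises the bdev claim check: Malloc0 is already claimed (exclusive_write) by subsystem cnode1, so attaching it as a namespace of cnode2 has to fail with the JSON-RPC error shown, and the harness treats that failure as a pass. The same sequence can be reproduced outside the test with scripts/rpc.py against an already-running target on the default /var/tmp/spdk.sock; a condensed sketch (the $RPC shorthand is only for readability here):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Transport, backing bdev, and the first subsystem that claims Malloc0.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# A second subsystem can be created, but re-using the claimed bdev must fail.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
if ! $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo ' Adding namespace failed - expected result.'
fi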
00:14:49.920 15:51:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:49.920 test case2: host connect to nvmf target in multiple paths 00:14:49.920 15:51:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:49.920 15:51:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.920 15:51:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 [2024-07-12 15:51:47.005507] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:49.920 15:51:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.920 15:51:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:50.486 15:51:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:51.053 15:51:48 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:51.053 15:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:14:51.053 15:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:51.053 15:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:51.053 15:51:48 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:14:52.989 15:51:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:52.989 15:51:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:52.989 15:51:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:52.989 15:51:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:52.989 15:51:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.989 15:51:50 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:14:52.989 15:51:50 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:52.989 [global] 00:14:52.989 thread=1 00:14:52.989 invalidate=1 00:14:52.989 rw=write 00:14:52.989 time_based=1 00:14:52.989 runtime=1 00:14:52.989 ioengine=libaio 00:14:52.989 direct=1 00:14:52.989 bs=4096 00:14:52.989 iodepth=1 00:14:52.989 norandommap=0 00:14:52.989 numjobs=1 00:14:52.989 00:14:52.989 verify_dump=1 00:14:52.989 verify_backlog=512 00:14:52.989 verify_state_save=0 00:14:52.989 do_verify=1 00:14:52.989 verify=crc32c-intel 00:14:52.989 [job0] 00:14:52.989 filename=/dev/nvme0n1 00:14:53.246 Could not set queue depth (nvme0n1) 00:14:53.246 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:53.246 fio-3.35 00:14:53.246 Starting 1 thread 00:14:54.617 00:14:54.617 job0: (groupid=0, jobs=1): err= 0: pid=740538: Fri Jul 12 15:51:51 2024 00:14:54.617 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:54.617 slat (nsec): min=6005, max=72825, avg=14033.07, stdev=10181.75 
00:14:54.617 clat (usec): min=192, max=41939, avg=1615.61, stdev=7306.52 00:14:54.617 lat (usec): min=200, max=41970, avg=1629.65, stdev=7307.53 00:14:54.617 clat percentiles (usec): 00:14:54.617 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 221], 00:14:54.617 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 251], 00:14:54.617 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 392], 95.00th=[ 429], 00:14:54.617 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:14:54.617 | 99.99th=[41681] 00:14:54.617 write: IOPS=688, BW=2753KiB/s (2819kB/s)(2756KiB/1001msec); 0 zone resets 00:14:54.617 slat (usec): min=9, max=29462, avg=59.05, stdev=1121.81 00:14:54.617 clat (usec): min=137, max=278, avg=174.13, stdev=24.64 00:14:54.617 lat (usec): min=149, max=29700, avg=233.18, stdev=1124.61 00:14:54.617 clat percentiles (usec): 00:14:54.617 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:14:54.617 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 176], 00:14:54.617 | 70.00th=[ 186], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 221], 00:14:54.617 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 281], 99.95th=[ 281], 00:14:54.617 | 99.99th=[ 281] 00:14:54.617 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:14:54.617 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:54.617 lat (usec) : 250=82.35%, 500=16.24% 00:14:54.617 lat (msec) : 50=1.42% 00:14:54.617 cpu : usr=1.00%, sys=1.80%, ctx=1204, majf=0, minf=2 00:14:54.617 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:54.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:54.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:54.617 issued rwts: total=512,689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:54.617 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:54.617 00:14:54.617 Run status group 0 (all jobs): 00:14:54.617 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:14:54.617 WRITE: bw=2753KiB/s (2819kB/s), 2753KiB/s-2753KiB/s (2819kB/s-2819kB/s), io=2756KiB (2822kB), run=1001-1001msec 00:14:54.617 00:14:54.617 Disk stats (read/write): 00:14:54.617 nvme0n1: ios=354/512, merge=0/0, ticks=1768/90, in_queue=1858, util=98.60% 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:54.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
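With the fio write job above completed and verified, the nmic data-path phase is done and the trace moves on to teardown. Stripped of the xtrace plumbing, the host-side cycle that was just traced (two paths to cnode1, wait for the serial, run the wrapped fio job, disconnect by NQN) comes down to the following sketch; the open-ended polling loop is a simplification of the harness's bounded retry:

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
SUBNQN=nqn.2016-06.io.spdk:cnode1

# Connect the same subsystem over both listeners (ports 4420 and 4421).
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4421

# Wait until a block device with the expected serial shows up.
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
    sleep 2
done

# Single 4k write job through the harness's fio wrapper, then tear down.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
nvme disconnect -n $SUBNQN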
00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:54.617 rmmod nvme_tcp 00:14:54.617 rmmod nvme_fabrics 00:14:54.617 rmmod nvme_keyring 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 739904 ']' 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 739904 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 739904 ']' 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 739904 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 739904 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 739904' 00:14:54.617 killing process with pid 739904 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 739904 00:14:54.617 15:51:51 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 739904 00:14:54.875 15:51:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:54.875 15:51:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:54.875 15:51:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:54.875 15:51:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:54.875 15:51:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:54.875 15:51:52 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.875 15:51:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.875 15:51:52 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.408 15:51:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:57.408 00:14:57.408 real 0m10.063s 00:14:57.408 user 0m22.266s 00:14:57.408 sys 0m2.484s 00:14:57.408 15:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.408 15:51:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:57.408 ************************************ 00:14:57.408 END TEST nvmf_nmic 00:14:57.408 ************************************ 00:14:57.408 15:51:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:57.408 15:51:54 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:57.408 15:51:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # 
'[' 3 -le 1 ']' 00:14:57.408 15:51:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.408 15:51:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:57.408 ************************************ 00:14:57.408 START TEST nvmf_fio_target 00:14:57.408 ************************************ 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:57.408 * Looking for test storage... 00:14:57.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.408 15:51:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:57.409 15:51:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.308 15:51:56 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:59.308 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:59.308 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.308 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.309 15:51:56 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:59.309 Found net devices under 0000:84:00.0: cvl_0_0 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:59.309 Found net devices under 0000:84:00.1: cvl_0_1 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:59.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:14:59.309 00:14:59.309 --- 10.0.0.2 ping statistics --- 00:14:59.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.309 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:59.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:14:59.309 00:14:59.309 --- 10.0.0.1 ping statistics --- 00:14:59.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.309 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:59.309 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:59.567 15:51:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:59.567 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:59.567 15:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:59.567 15:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.567 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=742632 00:14:59.567 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:59.567 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 742632 00:14:59.567 15:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 742632 ']' 00:14:59.567 15:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.567 15:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:59.567 15:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
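The second nvmftestinit/nvmfappstart pass traced above rebuilds the same namespace-backed loopback topology for the fio_target test and then launches the target inside it. Collapsed out of the xtrace form, the sequence is roughly the following; the NS/TGT_IF/INI_IF names are shorthand for this sketch only:

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # target-side port, moved into the namespace
INI_IF=cvl_0_1        # initiator-side port, left in the root namespace

ip netns add $NS
ip link set $TGT_IF netns $NS
ip addr add 10.0.0.1/24 dev $INI_IF
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $NS ip link set $TGT_IF up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec $NS ping -c 1 10.0.0.1

# Start nvmf_tgt inside the namespace; its RPC socket lives on the shared
# filesystem, so rpc.py keeps working from the root namespace.
ip netns exec $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &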
00:14:59.567 15:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:59.567 15:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.567 [2024-07-12 15:51:56.658206] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:14:59.567 [2024-07-12 15:51:56.658284] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.567 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.567 [2024-07-12 15:51:56.722723] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:59.567 [2024-07-12 15:51:56.824733] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.567 [2024-07-12 15:51:56.824795] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.567 [2024-07-12 15:51:56.824823] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.567 [2024-07-12 15:51:56.824834] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.567 [2024-07-12 15:51:56.824843] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:59.567 [2024-07-12 15:51:56.824942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.567 [2024-07-12 15:51:56.825003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.567 [2024-07-12 15:51:56.825076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.567 [2024-07-12 15:51:56.825079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.824 15:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.824 15:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:14:59.824 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:59.824 15:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:59.824 15:51:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.824 15:51:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.824 15:51:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:00.081 [2024-07-12 15:51:57.203133] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.081 15:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:00.338 15:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:00.338 15:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:00.595 15:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:00.595 15:51:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:00.852 15:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
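Beyond the plain Malloc0/Malloc1 namespaces, the trace that follows keeps allocating malloc bdevs and folds them into a striped raid0 set and a concat set before attaching everything to cnode1. Condensed into the equivalent rpc.py calls (same sizes, flags, and auto-assigned bdev names as in this run), the provisioning looks roughly like this:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC bdev_malloc_create 64 512                                  # -> Malloc0
$RPC bdev_malloc_create 64 512                                  # -> Malloc1

$RPC bdev_malloc_create 64 512                                  # -> Malloc2
$RPC bdev_malloc_create 64 512                                  # -> Malloc3
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'  # striped set

$RPC bdev_malloc_create 64 512                                  # -> Malloc4
$RPC bdev_malloc_create 64 512                                  # -> Malloc5
$RPC bdev_malloc_create 64 512                                  # -> Malloc6
$RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0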
00:15:00.852 15:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:01.110 15:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:01.110 15:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:01.367 15:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:01.625 15:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:01.625 15:51:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:01.882 15:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:01.882 15:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:02.140 15:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:02.140 15:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:02.397 15:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:02.654 15:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:02.654 15:51:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:02.912 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:02.912 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:03.169 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:03.425 [2024-07-12 15:52:00.562189] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.425 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:03.683 15:52:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:03.940 15:52:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:04.504 15:52:01 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:04.504 15:52:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:15:04.504 15:52:01 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:04.504 15:52:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:15:04.504 15:52:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:15:04.504 15:52:01 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:15:06.397 15:52:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:06.397 15:52:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:06.397 15:52:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:06.397 15:52:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:15:06.397 15:52:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:06.397 15:52:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:15:06.397 15:52:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:06.654 [global] 00:15:06.654 thread=1 00:15:06.654 invalidate=1 00:15:06.654 rw=write 00:15:06.654 time_based=1 00:15:06.654 runtime=1 00:15:06.655 ioengine=libaio 00:15:06.655 direct=1 00:15:06.655 bs=4096 00:15:06.655 iodepth=1 00:15:06.655 norandommap=0 00:15:06.655 numjobs=1 00:15:06.655 00:15:06.655 verify_dump=1 00:15:06.655 verify_backlog=512 00:15:06.655 verify_state_save=0 00:15:06.655 do_verify=1 00:15:06.655 verify=crc32c-intel 00:15:06.655 [job0] 00:15:06.655 filename=/dev/nvme0n1 00:15:06.655 [job1] 00:15:06.655 filename=/dev/nvme0n2 00:15:06.655 [job2] 00:15:06.655 filename=/dev/nvme0n3 00:15:06.655 [job3] 00:15:06.655 filename=/dev/nvme0n4 00:15:06.655 Could not set queue depth (nvme0n1) 00:15:06.655 Could not set queue depth (nvme0n2) 00:15:06.655 Could not set queue depth (nvme0n3) 00:15:06.655 Could not set queue depth (nvme0n4) 00:15:06.655 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:06.655 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:06.655 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:06.655 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:06.655 fio-3.35 00:15:06.655 Starting 4 threads 00:15:08.026 00:15:08.026 job0: (groupid=0, jobs=1): err= 0: pid=743699: Fri Jul 12 15:52:05 2024 00:15:08.026 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:15:08.026 slat (nsec): min=6748, max=42061, avg=8187.22, stdev=2752.80 00:15:08.026 clat (usec): min=178, max=41054, avg=1645.59, stdev=7512.50 00:15:08.026 lat (usec): min=185, max=41068, avg=1653.78, stdev=7513.36 00:15:08.026 clat percentiles (usec): 00:15:08.026 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:15:08.026 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 215], 00:15:08.026 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 247], 95.00th=[ 265], 00:15:08.026 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:08.026 | 99.99th=[41157] 00:15:08.026 write: IOPS=708, BW=2833KiB/s (2901kB/s)(2836KiB/1001msec); 0 zone resets 00:15:08.026 slat (nsec): min=8586, max=71789, avg=15548.15, stdev=9940.72 00:15:08.026 clat 
(usec): min=136, max=717, avg=194.83, stdev=50.75 00:15:08.026 lat (usec): min=146, max=727, avg=210.38, stdev=54.16 00:15:08.026 clat percentiles (usec): 00:15:08.026 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 159], 00:15:08.026 | 30.00th=[ 167], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 194], 00:15:08.026 | 70.00th=[ 208], 80.00th=[ 223], 90.00th=[ 247], 95.00th=[ 269], 00:15:08.026 | 99.00th=[ 318], 99.50th=[ 437], 99.90th=[ 717], 99.95th=[ 717], 00:15:08.026 | 99.99th=[ 717] 00:15:08.026 bw ( KiB/s): min= 4096, max= 4096, per=24.86%, avg=4096.00, stdev= 0.00, samples=1 00:15:08.026 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:08.026 lat (usec) : 250=91.15%, 500=7.13%, 750=0.25% 00:15:08.026 lat (msec) : 50=1.47% 00:15:08.026 cpu : usr=1.10%, sys=1.80%, ctx=1221, majf=0, minf=1 00:15:08.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:08.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.026 issued rwts: total=512,709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.026 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:08.026 job1: (groupid=0, jobs=1): err= 0: pid=743700: Fri Jul 12 15:52:05 2024 00:15:08.026 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:15:08.026 slat (nsec): min=5670, max=37455, avg=7383.98, stdev=2782.07 00:15:08.026 clat (usec): min=171, max=40899, avg=251.47, stdev=899.48 00:15:08.026 lat (usec): min=177, max=40908, avg=258.86, stdev=899.54 00:15:08.026 clat percentiles (usec): 00:15:08.026 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 198], 20.00th=[ 206], 00:15:08.026 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 235], 00:15:08.026 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 277], 00:15:08.026 | 99.00th=[ 429], 99.50th=[ 490], 99.90th=[ 578], 99.95th=[ 594], 00:15:08.026 | 99.99th=[41157] 00:15:08.026 write: IOPS=2473, BW=9894KiB/s (10.1MB/s)(9904KiB/1001msec); 0 zone resets 00:15:08.026 slat (nsec): min=7456, max=57093, avg=10858.08, stdev=5441.38 00:15:08.026 clat (usec): min=121, max=818, avg=174.39, stdev=52.00 00:15:08.026 lat (usec): min=130, max=828, avg=185.25, stdev=54.56 00:15:08.026 clat percentiles (usec): 00:15:08.026 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 141], 00:15:08.026 | 30.00th=[ 145], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 165], 00:15:08.026 | 70.00th=[ 176], 80.00th=[ 194], 90.00th=[ 239], 95.00th=[ 273], 00:15:08.026 | 99.00th=[ 371], 99.50th=[ 412], 99.90th=[ 603], 99.95th=[ 717], 00:15:08.026 | 99.99th=[ 816] 00:15:08.026 bw ( KiB/s): min= 8192, max= 8192, per=49.73%, avg=8192.00, stdev= 0.00, samples=1 00:15:08.026 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:08.026 lat (usec) : 250=86.16%, 500=13.57%, 750=0.22%, 1000=0.02% 00:15:08.026 lat (msec) : 50=0.02% 00:15:08.026 cpu : usr=3.10%, sys=5.50%, ctx=4525, majf=0, minf=1 00:15:08.026 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:08.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.027 issued rwts: total=2048,2476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:08.027 job2: (groupid=0, jobs=1): err= 0: pid=743701: Fri Jul 12 15:52:05 2024 00:15:08.027 read: IOPS=147, BW=591KiB/s 
(605kB/s)(604KiB/1022msec) 00:15:08.027 slat (nsec): min=4756, max=29685, avg=12654.56, stdev=3869.85 00:15:08.027 clat (usec): min=306, max=41968, avg=5782.36, stdev=13841.09 00:15:08.027 lat (usec): min=320, max=41985, avg=5795.02, stdev=13841.73 00:15:08.027 clat percentiles (usec): 00:15:08.027 | 1.00th=[ 314], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 359], 00:15:08.027 | 30.00th=[ 363], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 383], 00:15:08.027 | 70.00th=[ 404], 80.00th=[ 519], 90.00th=[41157], 95.00th=[41157], 00:15:08.027 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:08.027 | 99.99th=[42206] 00:15:08.027 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:15:08.027 slat (nsec): min=6903, max=62252, avg=16999.36, stdev=9698.25 00:15:08.027 clat (usec): min=163, max=495, avg=264.05, stdev=66.63 00:15:08.027 lat (usec): min=172, max=536, avg=281.05, stdev=68.79 00:15:08.027 clat percentiles (usec): 00:15:08.027 | 1.00th=[ 172], 5.00th=[ 184], 10.00th=[ 200], 20.00th=[ 217], 00:15:08.027 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 258], 00:15:08.027 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 363], 95.00th=[ 424], 00:15:08.027 | 99.00th=[ 474], 99.50th=[ 482], 99.90th=[ 494], 99.95th=[ 494], 00:15:08.027 | 99.99th=[ 494] 00:15:08.027 bw ( KiB/s): min= 4096, max= 4096, per=24.86%, avg=4096.00, stdev= 0.00, samples=1 00:15:08.027 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:08.027 lat (usec) : 250=42.99%, 500=52.04%, 750=1.81%, 1000=0.15% 00:15:08.027 lat (msec) : 50=3.02% 00:15:08.027 cpu : usr=0.69%, sys=0.88%, ctx=663, majf=0, minf=1 00:15:08.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:08.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.027 issued rwts: total=151,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:08.027 job3: (groupid=0, jobs=1): err= 0: pid=743702: Fri Jul 12 15:52:05 2024 00:15:08.027 read: IOPS=139, BW=556KiB/s (570kB/s)(568KiB/1021msec) 00:15:08.027 slat (nsec): min=6630, max=34003, avg=9166.13, stdev=3708.90 00:15:08.027 clat (usec): min=333, max=41464, avg=6103.41, stdev=14185.64 00:15:08.027 lat (usec): min=347, max=41477, avg=6112.57, stdev=14187.45 00:15:08.027 clat percentiles (usec): 00:15:08.027 | 1.00th=[ 351], 5.00th=[ 363], 10.00th=[ 367], 20.00th=[ 367], 00:15:08.027 | 30.00th=[ 371], 40.00th=[ 371], 50.00th=[ 371], 60.00th=[ 375], 00:15:08.027 | 70.00th=[ 375], 80.00th=[ 412], 90.00th=[41157], 95.00th=[41157], 00:15:08.027 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:08.027 | 99.99th=[41681] 00:15:08.027 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:15:08.027 slat (usec): min=9, max=27074, avg=69.87, stdev=1195.82 00:15:08.027 clat (usec): min=153, max=464, avg=223.16, stdev=48.10 00:15:08.027 lat (usec): min=163, max=27345, avg=293.03, stdev=1199.00 00:15:08.027 clat percentiles (usec): 00:15:08.027 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 184], 00:15:08.027 | 30.00th=[ 190], 40.00th=[ 200], 50.00th=[ 212], 60.00th=[ 229], 00:15:08.027 | 70.00th=[ 243], 80.00th=[ 255], 90.00th=[ 289], 95.00th=[ 310], 00:15:08.027 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 465], 99.95th=[ 465], 00:15:08.027 | 99.99th=[ 465] 00:15:08.027 bw ( KiB/s): min= 4096, max= 4096, 
per=24.86%, avg=4096.00, stdev= 0.00, samples=1 00:15:08.027 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:08.027 lat (usec) : 250=59.94%, 500=36.54%, 750=0.46% 00:15:08.027 lat (msec) : 50=3.06% 00:15:08.027 cpu : usr=0.88%, sys=0.88%, ctx=656, majf=0, minf=2 00:15:08.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:08.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.027 issued rwts: total=142,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:08.027 00:15:08.027 Run status group 0 (all jobs): 00:15:08.027 READ: bw=10.9MiB/s (11.4MB/s), 556KiB/s-8184KiB/s (570kB/s-8380kB/s), io=11.1MiB (11.7MB), run=1001-1022msec 00:15:08.027 WRITE: bw=16.1MiB/s (16.9MB/s), 2004KiB/s-9894KiB/s (2052kB/s-10.1MB/s), io=16.4MiB (17.2MB), run=1001-1022msec 00:15:08.027 00:15:08.027 Disk stats (read/write): 00:15:08.027 nvme0n1: ios=67/512, merge=0/0, ticks=720/98, in_queue=818, util=86.97% 00:15:08.027 nvme0n2: ios=1796/2048, merge=0/0, ticks=593/356, in_queue=949, util=89.73% 00:15:08.027 nvme0n3: ios=201/512, merge=0/0, ticks=747/129, in_queue=876, util=95.09% 00:15:08.027 nvme0n4: ios=184/512, merge=0/0, ticks=936/113, in_queue=1049, util=95.06% 00:15:08.027 15:52:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:08.027 [global] 00:15:08.027 thread=1 00:15:08.027 invalidate=1 00:15:08.027 rw=randwrite 00:15:08.027 time_based=1 00:15:08.027 runtime=1 00:15:08.027 ioengine=libaio 00:15:08.027 direct=1 00:15:08.027 bs=4096 00:15:08.027 iodepth=1 00:15:08.027 norandommap=0 00:15:08.027 numjobs=1 00:15:08.027 00:15:08.027 verify_dump=1 00:15:08.027 verify_backlog=512 00:15:08.027 verify_state_save=0 00:15:08.027 do_verify=1 00:15:08.027 verify=crc32c-intel 00:15:08.027 [job0] 00:15:08.027 filename=/dev/nvme0n1 00:15:08.027 [job1] 00:15:08.027 filename=/dev/nvme0n2 00:15:08.027 [job2] 00:15:08.027 filename=/dev/nvme0n3 00:15:08.027 [job3] 00:15:08.027 filename=/dev/nvme0n4 00:15:08.027 Could not set queue depth (nvme0n1) 00:15:08.027 Could not set queue depth (nvme0n2) 00:15:08.027 Could not set queue depth (nvme0n3) 00:15:08.027 Could not set queue depth (nvme0n4) 00:15:08.284 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:08.284 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:08.284 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:08.284 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:08.284 fio-3.35 00:15:08.284 Starting 4 threads 00:15:09.656 00:15:09.656 job0: (groupid=0, jobs=1): err= 0: pid=743928: Fri Jul 12 15:52:06 2024 00:15:09.656 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:15:09.656 slat (nsec): min=5079, max=54345, avg=13676.18, stdev=8839.72 00:15:09.656 clat (usec): min=183, max=40993, avg=383.86, stdev=2126.04 00:15:09.656 lat (usec): min=189, max=41006, avg=397.53, stdev=2126.01 00:15:09.656 clat percentiles (usec): 00:15:09.656 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 217], 00:15:09.656 | 30.00th=[ 227], 40.00th=[ 239], 50.00th=[ 260], 
60.00th=[ 273], 00:15:09.656 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 338], 95.00th=[ 363], 00:15:09.656 | 99.00th=[ 490], 99.50th=[ 586], 99.90th=[41157], 99.95th=[41157], 00:15:09.656 | 99.99th=[41157] 00:15:09.656 write: IOPS=1786, BW=7145KiB/s (7316kB/s)(7152KiB/1001msec); 0 zone resets 00:15:09.656 slat (nsec): min=6069, max=52240, avg=10693.05, stdev=5800.12 00:15:09.656 clat (usec): min=123, max=567, avg=200.43, stdev=79.36 00:15:09.656 lat (usec): min=130, max=587, avg=211.12, stdev=81.41 00:15:09.656 clat percentiles (usec): 00:15:09.656 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 145], 00:15:09.656 | 30.00th=[ 151], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 180], 00:15:09.656 | 70.00th=[ 202], 80.00th=[ 255], 90.00th=[ 322], 95.00th=[ 383], 00:15:09.656 | 99.00th=[ 453], 99.50th=[ 494], 99.90th=[ 545], 99.95th=[ 570], 00:15:09.656 | 99.99th=[ 570] 00:15:09.656 bw ( KiB/s): min= 8192, max= 8192, per=43.24%, avg=8192.00, stdev= 0.00, samples=1 00:15:09.656 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:09.656 lat (usec) : 250=63.96%, 500=35.44%, 750=0.42%, 1000=0.03% 00:15:09.656 lat (msec) : 20=0.03%, 50=0.12% 00:15:09.656 cpu : usr=1.60%, sys=4.80%, ctx=3324, majf=0, minf=1 00:15:09.656 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:09.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.656 issued rwts: total=1536,1788,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.656 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:09.656 job1: (groupid=0, jobs=1): err= 0: pid=743929: Fri Jul 12 15:52:06 2024 00:15:09.656 read: IOPS=26, BW=107KiB/s (110kB/s)(108KiB/1008msec) 00:15:09.656 slat (nsec): min=6451, max=39575, avg=18031.37, stdev=10708.63 00:15:09.656 clat (usec): min=266, max=42088, avg=31510.95, stdev=17179.29 00:15:09.656 lat (usec): min=272, max=42100, avg=31528.98, stdev=17180.91 00:15:09.656 clat percentiles (usec): 00:15:09.656 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 453], 20.00th=[ 523], 00:15:09.656 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:09.657 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:15:09.657 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:09.657 | 99.99th=[42206] 00:15:09.657 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:15:09.657 slat (nsec): min=9399, max=46267, avg=13509.20, stdev=6100.94 00:15:09.657 clat (usec): min=175, max=573, avg=286.04, stdev=56.05 00:15:09.657 lat (usec): min=185, max=607, avg=299.55, stdev=56.02 00:15:09.657 clat percentiles (usec): 00:15:09.657 | 1.00th=[ 192], 5.00th=[ 212], 10.00th=[ 231], 20.00th=[ 247], 00:15:09.657 | 30.00th=[ 260], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:15:09.657 | 70.00th=[ 293], 80.00th=[ 318], 90.00th=[ 388], 95.00th=[ 400], 00:15:09.657 | 99.00th=[ 424], 99.50th=[ 498], 99.90th=[ 570], 99.95th=[ 570], 00:15:09.657 | 99.99th=[ 570] 00:15:09.657 bw ( KiB/s): min= 4096, max= 4096, per=21.62%, avg=4096.00, stdev= 0.00, samples=1 00:15:09.657 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:09.657 lat (usec) : 250=20.96%, 500=74.21%, 750=0.93% 00:15:09.657 lat (msec) : 50=3.90% 00:15:09.657 cpu : usr=0.20%, sys=0.89%, ctx=540, majf=0, minf=2 00:15:09.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:09.657 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.657 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:09.657 job2: (groupid=0, jobs=1): err= 0: pid=743930: Fri Jul 12 15:52:06 2024 00:15:09.657 read: IOPS=1936, BW=7744KiB/s (7930kB/s)(7752KiB/1001msec) 00:15:09.657 slat (nsec): min=6697, max=46773, avg=10002.00, stdev=5021.00 00:15:09.657 clat (usec): min=188, max=3880, avg=262.67, stdev=99.17 00:15:09.657 lat (usec): min=195, max=3887, avg=272.67, stdev=100.50 00:15:09.657 clat percentiles (usec): 00:15:09.657 | 1.00th=[ 200], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:15:09.657 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:15:09.657 | 70.00th=[ 265], 80.00th=[ 285], 90.00th=[ 322], 95.00th=[ 355], 00:15:09.657 | 99.00th=[ 494], 99.50th=[ 510], 99.90th=[ 766], 99.95th=[ 3884], 00:15:09.657 | 99.99th=[ 3884] 00:15:09.657 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:15:09.657 slat (nsec): min=8649, max=52034, avg=12022.81, stdev=5477.09 00:15:09.657 clat (usec): min=137, max=872, avg=211.52, stdev=53.55 00:15:09.657 lat (usec): min=146, max=883, avg=223.54, stdev=55.63 00:15:09.657 clat percentiles (usec): 00:15:09.657 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 165], 00:15:09.657 | 30.00th=[ 174], 40.00th=[ 186], 50.00th=[ 202], 60.00th=[ 223], 00:15:09.657 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 281], 95.00th=[ 318], 00:15:09.657 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 416], 99.95th=[ 611], 00:15:09.657 | 99.99th=[ 873] 00:15:09.657 bw ( KiB/s): min= 8208, max= 8208, per=43.32%, avg=8208.00, stdev= 0.00, samples=1 00:15:09.657 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:15:09.657 lat (usec) : 250=70.22%, 500=29.35%, 750=0.35%, 1000=0.05% 00:15:09.657 lat (msec) : 4=0.03% 00:15:09.657 cpu : usr=2.80%, sys=6.40%, ctx=3987, majf=0, minf=1 00:15:09.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:09.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.657 issued rwts: total=1938,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:09.657 job3: (groupid=0, jobs=1): err= 0: pid=743931: Fri Jul 12 15:52:06 2024 00:15:09.657 read: IOPS=32, BW=129KiB/s (132kB/s)(132KiB/1026msec) 00:15:09.657 slat (nsec): min=8507, max=45750, avg=17640.00, stdev=10774.71 00:15:09.657 clat (usec): min=267, max=41976, avg=26315.44, stdev=19977.35 00:15:09.657 lat (usec): min=276, max=41993, avg=26333.08, stdev=19983.21 00:15:09.657 clat percentiles (usec): 00:15:09.657 | 1.00th=[ 269], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 293], 00:15:09.657 | 30.00th=[ 302], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:09.657 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:15:09.657 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:09.657 | 99.99th=[42206] 00:15:09.657 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:15:09.657 slat (nsec): min=10270, max=41633, avg=13565.94, stdev=5447.97 00:15:09.657 clat (usec): min=188, max=1975, avg=286.21, stdev=83.81 00:15:09.657 lat (usec): min=199, max=1986, avg=299.77, stdev=83.86 
00:15:09.657 clat percentiles (usec): 00:15:09.657 | 1.00th=[ 202], 5.00th=[ 227], 10.00th=[ 237], 20.00th=[ 260], 00:15:09.657 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:15:09.657 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 330], 95.00th=[ 355], 00:15:09.657 | 99.00th=[ 400], 99.50th=[ 433], 99.90th=[ 1975], 99.95th=[ 1975], 00:15:09.657 | 99.99th=[ 1975] 00:15:09.657 bw ( KiB/s): min= 4096, max= 4096, per=21.62%, avg=4096.00, stdev= 0.00, samples=1 00:15:09.657 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:09.657 lat (usec) : 250=13.94%, 500=82.02% 00:15:09.657 lat (msec) : 2=0.18%, 50=3.85% 00:15:09.657 cpu : usr=0.10%, sys=0.98%, ctx=547, majf=0, minf=1 00:15:09.657 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:09.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.657 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:09.657 issued rwts: total=33,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:09.657 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:09.657 00:15:09.657 Run status group 0 (all jobs): 00:15:09.657 READ: bw=13.5MiB/s (14.1MB/s), 107KiB/s-7744KiB/s (110kB/s-7930kB/s), io=13.8MiB (14.5MB), run=1001-1026msec 00:15:09.657 WRITE: bw=18.5MiB/s (19.4MB/s), 1996KiB/s-8184KiB/s (2044kB/s-8380kB/s), io=19.0MiB (19.9MB), run=1001-1026msec 00:15:09.657 00:15:09.657 Disk stats (read/write): 00:15:09.657 nvme0n1: ios=1259/1536, merge=0/0, ticks=512/314, in_queue=826, util=86.57% 00:15:09.657 nvme0n2: ios=48/512, merge=0/0, ticks=1675/149, in_queue=1824, util=97.66% 00:15:09.657 nvme0n3: ios=1560/1976, merge=0/0, ticks=1325/412, in_queue=1737, util=98.96% 00:15:09.657 nvme0n4: ios=82/512, merge=0/0, ticks=1553/140, in_queue=1693, util=98.84% 00:15:09.657 15:52:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:09.657 [global] 00:15:09.657 thread=1 00:15:09.657 invalidate=1 00:15:09.657 rw=write 00:15:09.657 time_based=1 00:15:09.657 runtime=1 00:15:09.657 ioengine=libaio 00:15:09.657 direct=1 00:15:09.657 bs=4096 00:15:09.657 iodepth=128 00:15:09.657 norandommap=0 00:15:09.657 numjobs=1 00:15:09.657 00:15:09.657 verify_dump=1 00:15:09.657 verify_backlog=512 00:15:09.657 verify_state_save=0 00:15:09.657 do_verify=1 00:15:09.657 verify=crc32c-intel 00:15:09.657 [job0] 00:15:09.657 filename=/dev/nvme0n1 00:15:09.657 [job1] 00:15:09.657 filename=/dev/nvme0n2 00:15:09.657 [job2] 00:15:09.657 filename=/dev/nvme0n3 00:15:09.657 [job3] 00:15:09.657 filename=/dev/nvme0n4 00:15:09.657 Could not set queue depth (nvme0n1) 00:15:09.657 Could not set queue depth (nvme0n2) 00:15:09.657 Could not set queue depth (nvme0n3) 00:15:09.657 Could not set queue depth (nvme0n4) 00:15:09.657 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:09.657 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:09.657 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:09.657 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:09.657 fio-3.35 00:15:09.657 Starting 4 threads 00:15:11.030 00:15:11.030 job0: (groupid=0, jobs=1): err= 0: pid=744166: Fri Jul 12 15:52:08 2024 00:15:11.030 read: IOPS=6246, BW=24.4MiB/s 
(25.6MB/s)(24.5MiB/1004msec) 00:15:11.030 slat (usec): min=2, max=8620, avg=78.36, stdev=527.81 00:15:11.030 clat (usec): min=1734, max=21214, avg=9757.59, stdev=2485.67 00:15:11.030 lat (usec): min=3478, max=21219, avg=9835.94, stdev=2514.55 00:15:11.030 clat percentiles (usec): 00:15:11.030 | 1.00th=[ 4293], 5.00th=[ 6390], 10.00th=[ 7308], 20.00th=[ 8094], 00:15:11.030 | 30.00th=[ 8586], 40.00th=[ 8848], 50.00th=[ 8979], 60.00th=[ 9503], 00:15:11.030 | 70.00th=[10290], 80.00th=[11600], 90.00th=[12911], 95.00th=[15270], 00:15:11.031 | 99.00th=[17171], 99.50th=[17695], 99.90th=[21103], 99.95th=[21103], 00:15:11.031 | 99.99th=[21103] 00:15:11.031 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:15:11.031 slat (usec): min=4, max=12048, avg=69.63, stdev=448.09 00:15:11.031 clat (usec): min=2479, max=40632, avg=9716.24, stdev=3896.91 00:15:11.031 lat (usec): min=2491, max=40667, avg=9785.86, stdev=3933.64 00:15:11.031 clat percentiles (usec): 00:15:11.031 | 1.00th=[ 3392], 5.00th=[ 5145], 10.00th=[ 6718], 20.00th=[ 8356], 00:15:11.031 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:15:11.031 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[11338], 95.00th=[14222], 00:15:11.031 | 99.00th=[28443], 99.50th=[34341], 99.90th=[34341], 99.95th=[34866], 00:15:11.031 | 99.99th=[40633] 00:15:11.031 bw ( KiB/s): min=25392, max=27848, per=47.38%, avg=26620.00, stdev=1736.65, samples=2 00:15:11.031 iops : min= 6348, max= 6962, avg=6655.00, stdev=434.16, samples=2 00:15:11.031 lat (msec) : 2=0.01%, 4=1.68%, 10=72.89%, 20=24.00%, 50=1.42% 00:15:11.031 cpu : usr=5.68%, sys=9.47%, ctx=765, majf=0, minf=7 00:15:11.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:15:11.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:11.031 issued rwts: total=6271,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:11.031 job1: (groupid=0, jobs=1): err= 0: pid=744167: Fri Jul 12 15:52:08 2024 00:15:11.031 read: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec) 00:15:11.031 slat (usec): min=2, max=68597, avg=160.94, stdev=1636.31 00:15:11.031 clat (usec): min=2911, max=96037, avg=19892.27, stdev=16609.90 00:15:11.031 lat (usec): min=3688, max=96043, avg=20053.21, stdev=16687.16 00:15:11.031 clat percentiles (usec): 00:15:11.031 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[11469], 20.00th=[11863], 00:15:11.031 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13829], 60.00th=[14877], 00:15:11.031 | 70.00th=[16909], 80.00th=[21365], 90.00th=[35390], 95.00th=[72877], 00:15:11.031 | 99.00th=[86508], 99.50th=[86508], 99.90th=[86508], 99.95th=[86508], 00:15:11.031 | 99.99th=[95945] 00:15:11.031 write: IOPS=3506, BW=13.7MiB/s (14.4MB/s)(13.9MiB/1017msec); 0 zone resets 00:15:11.031 slat (usec): min=4, max=16482, avg=134.86, stdev=783.42 00:15:11.031 clat (usec): min=2773, max=81919, avg=19030.34, stdev=9987.58 00:15:11.031 lat (usec): min=2783, max=81934, avg=19165.20, stdev=10052.31 00:15:11.031 clat percentiles (usec): 00:15:11.031 | 1.00th=[ 3785], 5.00th=[ 8979], 10.00th=[10421], 20.00th=[10945], 00:15:11.031 | 30.00th=[11731], 40.00th=[13435], 50.00th=[15139], 60.00th=[19530], 00:15:11.031 | 70.00th=[22938], 80.00th=[30278], 90.00th=[31327], 95.00th=[31589], 00:15:11.031 | 99.00th=[49021], 99.50th=[66847], 99.90th=[68682], 99.95th=[82314], 00:15:11.031 | 99.99th=[82314] 
00:15:11.031 bw ( KiB/s): min=12288, max=15216, per=24.48%, avg=13752.00, stdev=2070.41, samples=2 00:15:11.031 iops : min= 3072, max= 3804, avg=3438.00, stdev=517.60, samples=2 00:15:11.031 lat (msec) : 4=0.74%, 10=6.13%, 20=61.98%, 50=27.57%, 100=3.59% 00:15:11.031 cpu : usr=2.85%, sys=3.74%, ctx=287, majf=0, minf=13 00:15:11.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:11.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:11.031 issued rwts: total=3072,3566,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:11.031 job2: (groupid=0, jobs=1): err= 0: pid=744168: Fri Jul 12 15:52:08 2024 00:15:11.031 read: IOPS=1516, BW=6065KiB/s (6211kB/s)(6144KiB/1013msec) 00:15:11.031 slat (usec): min=2, max=32136, avg=328.44, stdev=2355.44 00:15:11.031 clat (usec): min=24033, max=89308, avg=42925.30, stdev=10695.88 00:15:11.031 lat (usec): min=24039, max=89316, avg=43253.74, stdev=10863.76 00:15:11.031 clat percentiles (usec): 00:15:11.031 | 1.00th=[23987], 5.00th=[27657], 10.00th=[30540], 20.00th=[34341], 00:15:11.031 | 30.00th=[35914], 40.00th=[40109], 50.00th=[41157], 60.00th=[43779], 00:15:11.031 | 70.00th=[45876], 80.00th=[51119], 90.00th=[58983], 95.00th=[64226], 00:15:11.031 | 99.00th=[72877], 99.50th=[78119], 99.90th=[78119], 99.95th=[89654], 00:15:11.031 | 99.99th=[89654] 00:15:11.031 write: IOPS=1989, BW=7957KiB/s (8148kB/s)(8060KiB/1013msec); 0 zone resets 00:15:11.031 slat (usec): min=4, max=30127, avg=231.60, stdev=1248.52 00:15:11.031 clat (usec): min=10662, max=72657, avg=30987.70, stdev=9439.64 00:15:11.031 lat (usec): min=12560, max=74502, avg=31219.30, stdev=9485.67 00:15:11.031 clat percentiles (usec): 00:15:11.031 | 1.00th=[14615], 5.00th=[21103], 10.00th=[22152], 20.00th=[23462], 00:15:11.031 | 30.00th=[24773], 40.00th=[29492], 50.00th=[30540], 60.00th=[31327], 00:15:11.031 | 70.00th=[31589], 80.00th=[32637], 90.00th=[42206], 95.00th=[52167], 00:15:11.031 | 99.00th=[66847], 99.50th=[68682], 99.90th=[70779], 99.95th=[72877], 00:15:11.031 | 99.99th=[72877] 00:15:11.031 bw ( KiB/s): min= 6904, max= 8192, per=13.43%, avg=7548.00, stdev=910.75, samples=2 00:15:11.031 iops : min= 1726, max= 2048, avg=1887.00, stdev=227.69, samples=2 00:15:11.031 lat (msec) : 20=1.55%, 50=85.19%, 100=13.26% 00:15:11.031 cpu : usr=1.78%, sys=2.87%, ctx=228, majf=0, minf=19 00:15:11.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:15:11.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:11.031 issued rwts: total=1536,2015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:11.031 job3: (groupid=0, jobs=1): err= 0: pid=744169: Fri Jul 12 15:52:08 2024 00:15:11.031 read: IOPS=1529, BW=6118KiB/s (6265kB/s)(6204KiB/1014msec) 00:15:11.031 slat (usec): min=2, max=55494, avg=245.81, stdev=2024.60 00:15:11.031 clat (usec): min=11403, max=86269, avg=30043.95, stdev=14202.26 00:15:11.031 lat (usec): min=11581, max=86275, avg=30289.76, stdev=14334.39 00:15:11.031 clat percentiles (usec): 00:15:11.031 | 1.00th=[11600], 5.00th=[11731], 10.00th=[12911], 20.00th=[15795], 00:15:11.031 | 30.00th=[24773], 40.00th=[28967], 50.00th=[30016], 60.00th=[33424], 00:15:11.031 | 70.00th=[34866], 80.00th=[35390], 
90.00th=[40109], 95.00th=[64750], 00:15:11.031 | 99.00th=[82314], 99.50th=[84411], 99.90th=[84411], 99.95th=[86508], 00:15:11.031 | 99.99th=[86508] 00:15:11.031 write: IOPS=2019, BW=8079KiB/s (8273kB/s)(8192KiB/1014msec); 0 zone resets 00:15:11.031 slat (usec): min=4, max=39504, avg=291.76, stdev=1606.33 00:15:11.031 clat (usec): min=1392, max=154164, avg=40025.97, stdev=29325.74 00:15:11.031 lat (usec): min=1403, max=154172, avg=40317.72, stdev=29457.35 00:15:11.031 clat percentiles (msec): 00:15:11.031 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 18], 20.00th=[ 23], 00:15:11.031 | 30.00th=[ 24], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 32], 00:15:11.031 | 70.00th=[ 36], 80.00th=[ 59], 90.00th=[ 84], 95.00th=[ 107], 00:15:11.031 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 155], 99.95th=[ 155], 00:15:11.031 | 99.99th=[ 155] 00:15:11.031 bw ( KiB/s): min= 6728, max= 8752, per=13.78%, avg=7740.00, stdev=1431.18, samples=2 00:15:11.031 iops : min= 1682, max= 2188, avg=1935.00, stdev=357.80, samples=2 00:15:11.031 lat (msec) : 2=0.06%, 10=1.81%, 20=18.70%, 50=62.93%, 100=13.42% 00:15:11.031 lat (msec) : 250=3.08% 00:15:11.031 cpu : usr=1.58%, sys=3.46%, ctx=213, majf=0, minf=11 00:15:11.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:15:11.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:11.031 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:11.031 issued rwts: total=1551,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:11.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:11.031 00:15:11.031 Run status group 0 (all jobs): 00:15:11.031 READ: bw=47.7MiB/s (50.1MB/s), 6065KiB/s-24.4MiB/s (6211kB/s-25.6MB/s), io=48.6MiB (50.9MB), run=1004-1017msec 00:15:11.031 WRITE: bw=54.9MiB/s (57.5MB/s), 7957KiB/s-25.9MiB/s (8148kB/s-27.2MB/s), io=55.8MiB (58.5MB), run=1004-1017msec 00:15:11.031 00:15:11.031 Disk stats (read/write): 00:15:11.031 nvme0n1: ios=5162/5632, merge=0/0, ticks=44776/45020, in_queue=89796, util=96.59% 00:15:11.031 nvme0n2: ios=2611/2910, merge=0/0, ticks=54549/50842, in_queue=105391, util=95.13% 00:15:11.031 nvme0n3: ios=1524/1536, merge=0/0, ticks=35500/28719, in_queue=64219, util=98.12% 00:15:11.031 nvme0n4: ios=1583/1559, merge=0/0, ticks=36939/51539, in_queue=88478, util=96.53% 00:15:11.031 15:52:08 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:11.031 [global] 00:15:11.031 thread=1 00:15:11.031 invalidate=1 00:15:11.031 rw=randwrite 00:15:11.031 time_based=1 00:15:11.031 runtime=1 00:15:11.031 ioengine=libaio 00:15:11.031 direct=1 00:15:11.031 bs=4096 00:15:11.031 iodepth=128 00:15:11.031 norandommap=0 00:15:11.031 numjobs=1 00:15:11.031 00:15:11.031 verify_dump=1 00:15:11.031 verify_backlog=512 00:15:11.031 verify_state_save=0 00:15:11.031 do_verify=1 00:15:11.031 verify=crc32c-intel 00:15:11.031 [job0] 00:15:11.031 filename=/dev/nvme0n1 00:15:11.031 [job1] 00:15:11.031 filename=/dev/nvme0n2 00:15:11.031 [job2] 00:15:11.031 filename=/dev/nvme0n3 00:15:11.031 [job3] 00:15:11.031 filename=/dev/nvme0n4 00:15:11.031 Could not set queue depth (nvme0n1) 00:15:11.031 Could not set queue depth (nvme0n2) 00:15:11.031 Could not set queue depth (nvme0n3) 00:15:11.031 Could not set queue depth (nvme0n4) 00:15:11.031 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:11.031 job1: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:11.031 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:11.031 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:11.031 fio-3.35 00:15:11.031 Starting 4 threads 00:15:12.404 00:15:12.404 job0: (groupid=0, jobs=1): err= 0: pid=744395: Fri Jul 12 15:52:09 2024 00:15:12.404 read: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec) 00:15:12.404 slat (usec): min=3, max=10801, avg=96.02, stdev=676.98 00:15:12.404 clat (usec): min=3809, max=22335, avg=11922.31, stdev=3052.10 00:15:12.404 lat (usec): min=3816, max=22342, avg=12018.33, stdev=3097.05 00:15:12.404 clat percentiles (usec): 00:15:12.404 | 1.00th=[ 4948], 5.00th=[ 8160], 10.00th=[ 9503], 20.00th=[10159], 00:15:12.404 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[11469], 00:15:12.404 | 70.00th=[11994], 80.00th=[14222], 90.00th=[16450], 95.00th=[18482], 00:15:12.404 | 99.00th=[20841], 99.50th=[21365], 99.90th=[21890], 99.95th=[22414], 00:15:12.404 | 99.99th=[22414] 00:15:12.404 write: IOPS=5812, BW=22.7MiB/s (23.8MB/s)(22.9MiB/1010msec); 0 zone resets 00:15:12.404 slat (usec): min=4, max=8632, avg=69.19, stdev=317.68 00:15:12.404 clat (usec): min=863, max=22316, avg=10318.94, stdev=2675.23 00:15:12.404 lat (usec): min=873, max=22323, avg=10388.13, stdev=2694.55 00:15:12.404 clat percentiles (usec): 00:15:12.404 | 1.00th=[ 3261], 5.00th=[ 5080], 10.00th=[ 6587], 20.00th=[ 9110], 00:15:12.404 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:15:12.404 | 70.00th=[11207], 80.00th=[11731], 90.00th=[12256], 95.00th=[12518], 00:15:12.404 | 99.00th=[20055], 99.50th=[20579], 99.90th=[21365], 99.95th=[21890], 00:15:12.404 | 99.99th=[22414] 00:15:12.404 bw ( KiB/s): min=21376, max=24576, per=31.14%, avg=22976.00, stdev=2262.74, samples=2 00:15:12.404 iops : min= 5344, max= 6144, avg=5744.00, stdev=565.69, samples=2 00:15:12.404 lat (usec) : 1000=0.03% 00:15:12.404 lat (msec) : 2=0.01%, 4=1.54%, 10=23.94%, 20=73.04%, 50=1.43% 00:15:12.404 cpu : usr=4.56%, sys=11.60%, ctx=695, majf=0, minf=13 00:15:12.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:12.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:12.404 issued rwts: total=5632,5871,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.404 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:12.404 job1: (groupid=0, jobs=1): err= 0: pid=744396: Fri Jul 12 15:52:09 2024 00:15:12.404 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:15:12.404 slat (usec): min=2, max=9426, avg=74.70, stdev=543.90 00:15:12.404 clat (usec): min=4076, max=23281, avg=11199.27, stdev=1942.85 00:15:12.404 lat (usec): min=4084, max=23287, avg=11273.97, stdev=2004.26 00:15:12.404 clat percentiles (usec): 00:15:12.404 | 1.00th=[ 7963], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9896], 00:15:12.404 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10945], 60.00th=[11338], 00:15:12.404 | 70.00th=[11731], 80.00th=[11994], 90.00th=[13042], 95.00th=[15008], 00:15:12.404 | 99.00th=[17957], 99.50th=[18744], 99.90th=[23200], 99.95th=[23200], 00:15:12.404 | 99.99th=[23200] 00:15:12.404 write: IOPS=6112, BW=23.9MiB/s (25.0MB/s)(23.9MiB/1002msec); 0 zone resets 00:15:12.404 slat (usec): min=3, max=9316, avg=58.60, 
stdev=439.10 00:15:12.404 clat (usec): min=315, max=57313, avg=10495.99, stdev=4211.74 00:15:12.404 lat (usec): min=904, max=57319, avg=10554.59, stdev=4232.09 00:15:12.404 clat percentiles (usec): 00:15:12.404 | 1.00th=[ 3195], 5.00th=[ 5538], 10.00th=[ 7177], 20.00th=[ 8356], 00:15:12.404 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10421], 60.00th=[10814], 00:15:12.404 | 70.00th=[11076], 80.00th=[11731], 90.00th=[11994], 95.00th=[14353], 00:15:12.404 | 99.00th=[31065], 99.50th=[39584], 99.90th=[52691], 99.95th=[57410], 00:15:12.404 | 99.99th=[57410] 00:15:12.404 bw ( KiB/s): min=23400, max=24576, per=32.51%, avg=23988.00, stdev=831.56, samples=2 00:15:12.404 iops : min= 5850, max= 6144, avg=5997.00, stdev=207.89, samples=2 00:15:12.404 lat (usec) : 500=0.01%, 1000=0.09% 00:15:12.404 lat (msec) : 2=0.14%, 4=0.63%, 10=29.96%, 20=68.34%, 50=0.71% 00:15:12.404 lat (msec) : 100=0.12% 00:15:12.404 cpu : usr=4.50%, sys=6.89%, ctx=396, majf=0, minf=13 00:15:12.404 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:12.404 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.404 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:12.404 issued rwts: total=5632,6125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.404 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:12.404 job2: (groupid=0, jobs=1): err= 0: pid=744412: Fri Jul 12 15:52:09 2024 00:15:12.404 read: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec) 00:15:12.404 slat (usec): min=3, max=16561, avg=142.06, stdev=920.36 00:15:12.404 clat (usec): min=9137, max=48859, avg=17300.98, stdev=5899.85 00:15:12.404 lat (usec): min=9147, max=48898, avg=17443.03, stdev=5984.89 00:15:12.404 clat percentiles (usec): 00:15:12.404 | 1.00th=[ 9634], 5.00th=[11076], 10.00th=[13042], 20.00th=[13435], 00:15:12.405 | 30.00th=[13566], 40.00th=[13698], 50.00th=[14222], 60.00th=[16450], 00:15:12.405 | 70.00th=[19006], 80.00th=[22938], 90.00th=[24511], 95.00th=[29754], 00:15:12.405 | 99.00th=[39584], 99.50th=[47973], 99.90th=[49021], 99.95th=[49021], 00:15:12.405 | 99.99th=[49021] 00:15:12.405 write: IOPS=3049, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:15:12.405 slat (usec): min=5, max=13493, avg=196.97, stdev=972.38 00:15:12.405 clat (usec): min=5664, max=71689, avg=26884.82, stdev=18602.63 00:15:12.405 lat (usec): min=6540, max=71711, avg=27081.79, stdev=18729.03 00:15:12.405 clat percentiles (usec): 00:15:12.405 | 1.00th=[ 8586], 5.00th=[12256], 10.00th=[13304], 20.00th=[14091], 00:15:12.405 | 30.00th=[14484], 40.00th=[16188], 50.00th=[18220], 60.00th=[20055], 00:15:12.405 | 70.00th=[25035], 80.00th=[44303], 90.00th=[62653], 95.00th=[66847], 00:15:12.405 | 99.00th=[70779], 99.50th=[70779], 99.90th=[71828], 99.95th=[71828], 00:15:12.405 | 99.99th=[71828] 00:15:12.405 bw ( KiB/s): min=10616, max=12936, per=15.96%, avg=11776.00, stdev=1640.49, samples=2 00:15:12.405 iops : min= 2654, max= 3234, avg=2944.00, stdev=410.12, samples=2 00:15:12.405 lat (msec) : 10=2.01%, 20=63.77%, 50=24.05%, 100=10.18% 00:15:12.405 cpu : usr=2.88%, sys=6.16%, ctx=344, majf=0, minf=11 00:15:12.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:12.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:12.405 issued rwts: total=2560,3071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.405 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:15:12.405 job3: (groupid=0, jobs=1): err= 0: pid=744418: Fri Jul 12 15:52:09 2024 00:15:12.405 read: IOPS=3368, BW=13.2MiB/s (13.8MB/s)(13.3MiB/1011msec) 00:15:12.405 slat (usec): min=2, max=25248, avg=137.78, stdev=959.86 00:15:12.405 clat (usec): min=2524, max=54816, avg=18064.89, stdev=6503.11 00:15:12.405 lat (usec): min=5372, max=54842, avg=18202.68, stdev=6569.22 00:15:12.405 clat percentiles (usec): 00:15:12.405 | 1.00th=[ 5473], 5.00th=[10552], 10.00th=[12387], 20.00th=[13566], 00:15:12.405 | 30.00th=[14746], 40.00th=[15533], 50.00th=[16057], 60.00th=[17171], 00:15:12.405 | 70.00th=[18744], 80.00th=[22938], 90.00th=[25560], 95.00th=[33817], 00:15:12.405 | 99.00th=[35914], 99.50th=[46400], 99.90th=[54789], 99.95th=[54789], 00:15:12.405 | 99.99th=[54789] 00:15:12.405 write: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:15:12.405 slat (usec): min=3, max=20796, avg=142.33, stdev=946.80 00:15:12.405 clat (usec): min=6012, max=68542, avg=18375.67, stdev=10239.80 00:15:12.405 lat (usec): min=6031, max=68560, avg=18518.00, stdev=10322.74 00:15:12.405 clat percentiles (usec): 00:15:12.405 | 1.00th=[ 6587], 5.00th=[ 8586], 10.00th=[11207], 20.00th=[13566], 00:15:12.405 | 30.00th=[14091], 40.00th=[14746], 50.00th=[15139], 60.00th=[15533], 00:15:12.405 | 70.00th=[16581], 80.00th=[23200], 90.00th=[25822], 95.00th=[43254], 00:15:12.405 | 99.00th=[65799], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 00:15:12.405 | 99.99th=[68682] 00:15:12.405 bw ( KiB/s): min=12336, max=16336, per=19.43%, avg=14336.00, stdev=2828.43, samples=2 00:15:12.405 iops : min= 3084, max= 4084, avg=3584.00, stdev=707.11, samples=2 00:15:12.405 lat (msec) : 4=0.01%, 10=6.41%, 20=70.21%, 50=21.73%, 100=1.63% 00:15:12.405 cpu : usr=3.27%, sys=5.45%, ctx=271, majf=0, minf=13 00:15:12.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:12.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:12.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:12.405 issued rwts: total=3406,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:12.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:12.405 00:15:12.405 Run status group 0 (all jobs): 00:15:12.405 READ: bw=66.6MiB/s (69.8MB/s), 9.93MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=67.3MiB (70.6MB), run=1002-1011msec 00:15:12.405 WRITE: bw=72.1MiB/s (75.6MB/s), 11.9MiB/s-23.9MiB/s (12.5MB/s-25.0MB/s), io=72.9MiB (76.4MB), run=1002-1011msec 00:15:12.405 00:15:12.405 Disk stats (read/write): 00:15:12.405 nvme0n1: ios=4651/5119, merge=0/0, ticks=51922/50480, in_queue=102402, util=97.70% 00:15:12.405 nvme0n2: ios=4943/5120, merge=0/0, ticks=43506/41934, in_queue=85440, util=86.16% 00:15:12.405 nvme0n3: ios=2098/2119, merge=0/0, ticks=19265/32732, in_queue=51997, util=98.22% 00:15:12.405 nvme0n4: ios=2617/2924, merge=0/0, ticks=24753/28112, in_queue=52865, util=97.36% 00:15:12.405 15:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:12.405 15:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=744611 00:15:12.405 15:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:12.405 15:52:09 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:12.405 [global] 00:15:12.405 thread=1 00:15:12.405 invalidate=1 00:15:12.405 rw=read 00:15:12.405 time_based=1 00:15:12.405 runtime=10 00:15:12.405 
ioengine=libaio 00:15:12.405 direct=1 00:15:12.405 bs=4096 00:15:12.405 iodepth=1 00:15:12.405 norandommap=1 00:15:12.405 numjobs=1 00:15:12.405 00:15:12.405 [job0] 00:15:12.405 filename=/dev/nvme0n1 00:15:12.405 [job1] 00:15:12.405 filename=/dev/nvme0n2 00:15:12.405 [job2] 00:15:12.405 filename=/dev/nvme0n3 00:15:12.405 [job3] 00:15:12.405 filename=/dev/nvme0n4 00:15:12.405 Could not set queue depth (nvme0n1) 00:15:12.405 Could not set queue depth (nvme0n2) 00:15:12.405 Could not set queue depth (nvme0n3) 00:15:12.405 Could not set queue depth (nvme0n4) 00:15:12.662 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:12.662 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:12.662 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:12.662 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:12.662 fio-3.35 00:15:12.662 Starting 4 threads 00:15:15.942 15:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:15.942 15:52:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:15.942 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=303104, buflen=4096 00:15:15.942 fio: pid=744749, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:15.942 15:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:15.942 15:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:15.942 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=319488, buflen=4096 00:15:15.942 fio: pid=744748, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:16.201 15:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:16.201 15:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:16.201 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=335872, buflen=4096 00:15:16.201 fio: pid=744746, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:16.458 15:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:16.458 15:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:16.458 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=454656, buflen=4096 00:15:16.458 fio: pid=744747, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:15:16.458 00:15:16.458 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=744746: Fri Jul 12 15:52:13 2024 00:15:16.458 read: IOPS=24, BW=96.5KiB/s (98.8kB/s)(328KiB/3400msec) 00:15:16.458 slat (usec): min=11, max=14880, avg=196.35, stdev=1631.49 00:15:16.458 clat (usec): min=40795, max=41375, avg=40986.01, stdev=74.21 00:15:16.458 lat (usec): min=40816, max=55991, avg=41184.56, 
stdev=1656.88 00:15:16.458 clat percentiles (usec): 00:15:16.458 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:16.458 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:16.458 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:16.458 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:16.458 | 99.99th=[41157] 00:15:16.458 bw ( KiB/s): min= 96, max= 104, per=25.72%, avg=97.33, stdev= 3.27, samples=6 00:15:16.458 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:15:16.458 lat (msec) : 50=98.80% 00:15:16.458 cpu : usr=0.09%, sys=0.00%, ctx=85, majf=0, minf=1 00:15:16.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:16.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.458 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.458 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:16.458 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=744747: Fri Jul 12 15:52:13 2024 00:15:16.458 read: IOPS=30, BW=121KiB/s (124kB/s)(444KiB/3659msec) 00:15:16.458 slat (usec): min=10, max=5901, avg=73.71, stdev=555.76 00:15:16.458 clat (usec): min=389, max=50960, avg=32680.99, stdev=16546.50 00:15:16.458 lat (usec): min=408, max=50972, avg=32754.96, stdev=16576.88 00:15:16.458 clat percentiles (usec): 00:15:16.458 | 1.00th=[ 392], 5.00th=[ 465], 10.00th=[ 523], 20.00th=[ 619], 00:15:16.458 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:16.458 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:16.458 | 99.00th=[41681], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:15:16.458 | 99.99th=[51119] 00:15:16.458 bw ( KiB/s): min= 96, max= 168, per=32.35%, avg=122.14, stdev=31.33, samples=7 00:15:16.458 iops : min= 24, max= 42, avg=30.43, stdev= 7.91, samples=7 00:15:16.458 lat (usec) : 500=7.14%, 750=13.39% 00:15:16.458 lat (msec) : 50=77.68%, 100=0.89% 00:15:16.458 cpu : usr=0.00%, sys=0.14%, ctx=115, majf=0, minf=1 00:15:16.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:16.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.458 complete : 0=0.9%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.458 issued rwts: total=112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:16.458 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=744748: Fri Jul 12 15:52:13 2024 00:15:16.458 read: IOPS=25, BW=99.4KiB/s (102kB/s)(312KiB/3139msec) 00:15:16.458 slat (nsec): min=8635, max=56517, avg=17648.11, stdev=6985.66 00:15:16.458 clat (usec): min=249, max=41345, avg=39931.13, stdev=6473.07 00:15:16.458 lat (usec): min=262, max=41372, avg=39948.84, stdev=6473.40 00:15:16.458 clat percentiles (usec): 00:15:16.458 | 1.00th=[ 249], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:16.458 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:16.458 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:16.458 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:16.458 | 99.99th=[41157] 00:15:16.458 bw ( KiB/s): min= 96, max= 104, per=26.51%, avg=100.00, stdev= 4.38, samples=6 
00:15:16.458 iops : min= 24, max= 26, avg=25.00, stdev= 1.10, samples=6 00:15:16.458 lat (usec) : 250=1.27%, 500=1.27% 00:15:16.458 lat (msec) : 50=96.20% 00:15:16.458 cpu : usr=0.10%, sys=0.00%, ctx=79, majf=0, minf=1 00:15:16.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:16.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.458 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.458 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:16.458 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=744749: Fri Jul 12 15:52:13 2024 00:15:16.458 read: IOPS=25, BW=102KiB/s (105kB/s)(296KiB/2899msec) 00:15:16.458 slat (nsec): min=10020, max=46671, avg=19262.15, stdev=7577.49 00:15:16.458 clat (usec): min=266, max=41889, avg=38776.99, stdev=9250.28 00:15:16.458 lat (usec): min=281, max=41900, avg=38796.29, stdev=9250.48 00:15:16.458 clat percentiles (usec): 00:15:16.458 | 1.00th=[ 269], 5.00th=[ 400], 10.00th=[40633], 20.00th=[41157], 00:15:16.458 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:16.458 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:16.458 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:16.458 | 99.99th=[41681] 00:15:16.458 bw ( KiB/s): min= 96, max= 112, per=27.04%, avg=102.40, stdev= 6.69, samples=5 00:15:16.458 iops : min= 24, max= 28, avg=25.60, stdev= 1.67, samples=5 00:15:16.458 lat (usec) : 500=5.33% 00:15:16.458 lat (msec) : 50=93.33% 00:15:16.458 cpu : usr=0.07%, sys=0.00%, ctx=75, majf=0, minf=1 00:15:16.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:16.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.458 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.458 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:16.458 00:15:16.458 Run status group 0 (all jobs): 00:15:16.458 READ: bw=377KiB/s (386kB/s), 96.5KiB/s-121KiB/s (98.8kB/s-124kB/s), io=1380KiB (1413kB), run=2899-3659msec 00:15:16.458 00:15:16.458 Disk stats (read/write): 00:15:16.458 nvme0n1: ios=80/0, merge=0/0, ticks=3281/0, in_queue=3281, util=94.94% 00:15:16.458 nvme0n2: ios=109/0, merge=0/0, ticks=3545/0, in_queue=3545, util=96.13% 00:15:16.458 nvme0n3: ios=77/0, merge=0/0, ticks=3076/0, in_queue=3076, util=96.74% 00:15:16.458 nvme0n4: ios=73/0, merge=0/0, ticks=2831/0, in_queue=2831, util=96.72% 00:15:16.715 15:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:16.715 15:52:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:16.972 15:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:16.972 15:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:17.228 15:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:17.228 15:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:17.484 15:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:17.484 15:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:17.740 15:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:17.740 15:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 744611 00:15:17.740 15:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:17.740 15:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:17.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.740 15:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:17.740 15:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:15:17.740 15:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:17.740 15:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.740 15:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:17.740 15:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:17.740 15:52:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:15:17.740 15:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:17.740 15:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:17.740 nvmf hotplug test: fio failed as expected 00:15:17.740 15:52:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:17.996 15:52:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:17.996 15:52:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:17.996 15:52:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:17.996 15:52:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:17.996 15:52:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:17.996 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:17.996 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:15:17.996 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:17.996 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:15:17.996 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:17.996 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:17.996 rmmod nvme_tcp 00:15:17.996 rmmod nvme_fabrics 00:15:18.253 rmmod nvme_keyring 00:15:18.253 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:18.253 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:15:18.253 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:15:18.253 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 742632 ']' 00:15:18.253 
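The io_u errors above are the point of the hotplug check rather than a failure of it: fio.sh starts a 10-second read workload against the exported namespaces in the background, sleeps briefly, then deletes the raid, concat, and malloc bdevs underneath them, so every job is expected to die with err=121 (Remote I/O error), and the script then reports "nvmf hotplug test: fio failed as expected". Below is a minimal sketch of that pattern built from the commands recorded in this log; the backgrounding/wait shell structure is an assumption (only fio_pid=..., sleep 3, the deletes, and wait are visible in the trace), and the Malloc list mirrors fio.sh's loop over $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs shown above.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sync
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3                                     # give the jobs time to start issuing reads
  $rpc bdev_raid_delete concat0               # pull the volumes out from under the live namespaces
  $rpc bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $rpc bdev_malloc_delete $m              # then every backing malloc bdev
  done
  if wait $fio_pid; then
      echo 'unexpected: fio survived bdev removal'
  else
      echo 'nvmf hotplug test: fio failed as expected'   # reads now hit deleted namespaces: err=121
  fi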
15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 742632 00:15:18.253 15:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 742632 ']' 00:15:18.253 15:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 742632 00:15:18.253 15:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:15:18.253 15:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:18.253 15:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 742632 00:15:18.253 15:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:18.253 15:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:18.253 15:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 742632' 00:15:18.253 killing process with pid 742632 00:15:18.253 15:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 742632 00:15:18.253 15:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 742632 00:15:18.513 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:18.513 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:18.513 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:18.513 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:18.513 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:18.513 15:52:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:18.513 15:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:18.513 15:52:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.439 15:52:17 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:20.439 00:15:20.439 real 0m23.392s 00:15:20.439 user 1m21.855s 00:15:20.439 sys 0m6.049s 00:15:20.439 15:52:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:20.439 15:52:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.439 ************************************ 00:15:20.439 END TEST nvmf_fio_target 00:15:20.439 ************************************ 00:15:20.439 15:52:17 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:20.439 15:52:17 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:20.439 15:52:17 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:20.439 15:52:17 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:20.439 15:52:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:20.439 ************************************ 00:15:20.439 START TEST nvmf_bdevio 00:15:20.439 ************************************ 00:15:20.439 15:52:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:20.697 * Looking for test storage... 
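The teardown traced above reduces to a handful of rpc.py calls plus an initiator-side disconnect. A condensed sketch of that flow, using the bdev names, NQN and serial shown in this run (the retry loop is a simplification of the waitforserial_disconnect helper, and the variable names are illustrative only):

# condensed illustration of the hotplug-test teardown seen in the trace above
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py

# delete the malloc bdevs that were exported through the subsystem
for malloc_bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
    $RPC bdev_malloc_delete "$malloc_bdev"
done

# disconnect the kernel initiator from the test subsystem
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# wait until no block device with the test serial is visible any more
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1
done

# drop the subsystem and unload the initiator-side modules
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

Because the namespaces disappear underneath the still-running fio jobs, the per-job Remote I/O errors above are the expected outcome; that is why fio_status=4 is accepted and the run prints "nvmf hotplug test: fio failed as expected".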
00:15:20.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:20.697 15:52:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:15:20.698 15:52:17 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:22.593 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:22.593 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:22.593 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:15:22.594 Found net devices under 0000:84:00.0: cvl_0_0 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:22.594 
Found net devices under 0000:84:00.1: cvl_0_1 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:22.594 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:22.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:15:22.852 00:15:22.852 --- 10.0.0.2 ping statistics --- 00:15:22.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.852 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:22.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:22.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:15:22.852 00:15:22.852 --- 10.0.0.1 ping statistics --- 00:15:22.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.852 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=747389 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 747389 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 747389 ']' 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:22.852 15:52:19 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:22.852 [2024-07-12 15:52:20.046771] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:15:22.852 [2024-07-12 15:52:20.046868] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.852 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.852 [2024-07-12 15:52:20.113977] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:23.110 [2024-07-12 15:52:20.228357] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.110 [2024-07-12 15:52:20.228418] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:23.110 [2024-07-12 15:52:20.228432] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.110 [2024-07-12 15:52:20.228443] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.110 [2024-07-12 15:52:20.228453] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.110 [2024-07-12 15:52:20.228545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:23.110 [2024-07-12 15:52:20.228606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:23.110 [2024-07-12 15:52:20.228674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:23.110 [2024-07-12 15:52:20.228677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:23.110 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:23.110 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:15:23.110 15:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:23.110 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:23.110 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.110 15:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.110 15:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:23.110 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.110 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.110 [2024-07-12 15:52:20.390643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.110 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.110 15:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:23.110 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.110 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.368 Malloc0 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
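The rpc_cmd calls traced above amount to the usual four-step target bring-up before bdevio is pointed at it. Written out as plain rpc.py invocations (rpc_cmd is a thin wrapper that talks to /var/tmp/spdk.sock; sizes, flags and the NQN below are the ones from the trace):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# 1. enable the TCP transport with the same options used throughout the run (-o plus -u 8192)
$RPC nvmf_create_transport -t tcp -o -u 8192

# 2. back the namespace with a 64 MiB / 512 B-block malloc bdev
$RPC bdev_malloc_create 64 512 -b Malloc0

# 3. create the subsystem (-a allows any host, -s sets the serial) and attach the bdev as a namespace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

# 4. expose it on the in-namespace target address used for the whole run
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then reaches that listener through the JSON handed in on /dev/fd/62 (the bdev_nvme_attach_controller parameters printed further down), so no kernel NVMe initiator is involved in this part of the run.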
00:15:23.368 [2024-07-12 15:52:20.444591] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:23.368 { 00:15:23.368 "params": { 00:15:23.368 "name": "Nvme$subsystem", 00:15:23.368 "trtype": "$TEST_TRANSPORT", 00:15:23.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:23.368 "adrfam": "ipv4", 00:15:23.368 "trsvcid": "$NVMF_PORT", 00:15:23.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:23.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:23.368 "hdgst": ${hdgst:-false}, 00:15:23.368 "ddgst": ${ddgst:-false} 00:15:23.368 }, 00:15:23.368 "method": "bdev_nvme_attach_controller" 00:15:23.368 } 00:15:23.368 EOF 00:15:23.368 )") 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:15:23.368 15:52:20 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:23.368 "params": { 00:15:23.368 "name": "Nvme1", 00:15:23.368 "trtype": "tcp", 00:15:23.368 "traddr": "10.0.0.2", 00:15:23.368 "adrfam": "ipv4", 00:15:23.368 "trsvcid": "4420", 00:15:23.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.368 "hdgst": false, 00:15:23.368 "ddgst": false 00:15:23.368 }, 00:15:23.368 "method": "bdev_nvme_attach_controller" 00:15:23.368 }' 00:15:23.368 [2024-07-12 15:52:20.493073] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:15:23.368 [2024-07-12 15:52:20.493136] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid747415 ] 00:15:23.368 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.368 [2024-07-12 15:52:20.554872] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:23.626 [2024-07-12 15:52:20.672786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.626 [2024-07-12 15:52:20.672822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:23.626 [2024-07-12 15:52:20.672826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.626 I/O targets: 00:15:23.626 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:23.626 00:15:23.626 00:15:23.626 CUnit - A unit testing framework for C - Version 2.1-3 00:15:23.626 http://cunit.sourceforge.net/ 00:15:23.626 00:15:23.626 00:15:23.626 Suite: bdevio tests on: Nvme1n1 00:15:23.883 Test: blockdev write read block ...passed 00:15:23.883 Test: blockdev write zeroes read block ...passed 00:15:23.883 Test: blockdev write zeroes read no split ...passed 00:15:23.883 Test: blockdev write zeroes read split ...passed 00:15:23.883 Test: blockdev write zeroes read split partial ...passed 00:15:23.883 Test: blockdev reset ...[2024-07-12 15:52:21.011554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:15:23.883 [2024-07-12 15:52:21.011663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x809dc0 (9): Bad file descriptor 00:15:23.883 [2024-07-12 15:52:21.066501] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:23.883 passed 00:15:23.883 Test: blockdev write read 8 blocks ...passed 00:15:23.883 Test: blockdev write read size > 128k ...passed 00:15:23.883 Test: blockdev write read invalid size ...passed 00:15:24.140 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:24.140 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:24.140 Test: blockdev write read max offset ...passed 00:15:24.140 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:24.140 Test: blockdev writev readv 8 blocks ...passed 00:15:24.140 Test: blockdev writev readv 30 x 1block ...passed 00:15:24.140 Test: blockdev writev readv block ...passed 00:15:24.140 Test: blockdev writev readv size > 128k ...passed 00:15:24.140 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:24.140 Test: blockdev comparev and writev ...[2024-07-12 15:52:21.318116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.140 [2024-07-12 15:52:21.318151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:24.140 [2024-07-12 15:52:21.318175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.140 [2024-07-12 15:52:21.318191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:24.140 [2024-07-12 15:52:21.318530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.140 [2024-07-12 15:52:21.318555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:24.140 [2024-07-12 15:52:21.318577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.140 [2024-07-12 15:52:21.318593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:24.140 [2024-07-12 15:52:21.318947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.140 [2024-07-12 15:52:21.318971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:24.140 [2024-07-12 15:52:21.318992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.140 [2024-07-12 15:52:21.319008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:24.140 [2024-07-12 15:52:21.319346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.140 [2024-07-12 15:52:21.319369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:24.140 [2024-07-12 15:52:21.319391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:24.140 [2024-07-12 15:52:21.319406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:24.140 passed 00:15:24.140 Test: blockdev nvme passthru rw ...passed 00:15:24.140 Test: blockdev nvme passthru vendor specific ...[2024-07-12 15:52:21.401019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:24.140 [2024-07-12 15:52:21.401052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:24.140 [2024-07-12 15:52:21.401198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:24.140 [2024-07-12 15:52:21.401221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:24.140 [2024-07-12 15:52:21.401364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:24.140 [2024-07-12 15:52:21.401387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:24.140 [2024-07-12 15:52:21.401529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:24.140 [2024-07-12 15:52:21.401552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:24.140 passed 00:15:24.140 Test: blockdev nvme admin passthru ...passed 00:15:24.398 Test: blockdev copy ...passed 00:15:24.398 00:15:24.398 Run Summary: Type Total Ran Passed Failed Inactive 00:15:24.398 suites 1 1 n/a 0 0 00:15:24.398 tests 23 23 23 0 0 00:15:24.398 asserts 152 152 152 0 n/a 00:15:24.398 00:15:24.398 Elapsed time = 1.111 seconds 00:15:24.398 15:52:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:24.398 15:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.398 15:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:24.656 rmmod nvme_tcp 00:15:24.656 rmmod nvme_fabrics 00:15:24.656 rmmod nvme_keyring 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 747389 ']' 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 747389 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
747389 ']' 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 747389 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 747389 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 747389' 00:15:24.656 killing process with pid 747389 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 747389 00:15:24.656 15:52:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 747389 00:15:24.915 15:52:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:24.915 15:52:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:24.915 15:52:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:24.915 15:52:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:24.915 15:52:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:24.915 15:52:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.915 15:52:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.915 15:52:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.447 15:52:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:27.447 00:15:27.447 real 0m6.428s 00:15:27.447 user 0m10.164s 00:15:27.447 sys 0m2.159s 00:15:27.447 15:52:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.447 15:52:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:27.447 ************************************ 00:15:27.447 END TEST nvmf_bdevio 00:15:27.447 ************************************ 00:15:27.447 15:52:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:27.447 15:52:24 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:27.447 15:52:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:27.447 15:52:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.447 15:52:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:27.447 ************************************ 00:15:27.447 START TEST nvmf_auth_target 00:15:27.447 ************************************ 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:27.447 * Looking for test storage... 
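The killprocess/nvmftestfini pattern that closes each test is the same every time: confirm the recorded pid is still an SPDK reactor, kill it, wait for it, then unload the kernel modules and tear down the namespace plumbing. Roughly, with the pid, namespace and interface names from this run (error handling trimmed; the netns delete stands in for what the _remove_spdk_ns helper does):

nvmfpid=747389   # pid recorded by nvmfappstart for this run

# only kill the pid if it is still alive and still an SPDK reactor thread
if kill -0 "$nvmfpid" 2>/dev/null; then
    process_name=$(ps --no-headers -o comm= "$nvmfpid")
    if [ "$process_name" != "sudo" ]; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid"   # works because the target was launched from this same shell
    fi
fi

# initiator-side cleanup: unload the NVMe/TCP modules and drop the netns plumbing
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk 2>/dev/null
ip -4 addr flush cvl_0_1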
00:15:27.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:15:27.447 15:52:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:29.349 15:52:26 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:15:29.349 Found 0000:84:00.0 (0x8086 - 0x159b) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:15:29.349 Found 0000:84:00.1 (0x8086 - 0x159b) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: 
cvl_0_0' 00:15:29.349 Found net devices under 0000:84:00.0: cvl_0_0 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:15:29.349 Found net devices under 0000:84:00.1: cvl_0_1 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:29.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:29.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:15:29.349 00:15:29.349 --- 10.0.0.2 ping statistics --- 00:15:29.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.349 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:29.349 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:29.349 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:15:29.349 00:15:29.349 --- 10.0.0.1 ping statistics --- 00:15:29.349 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:29.349 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=749506 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 749506 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 749506 ']' 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
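For reference, the namespace split that nvmf_tcp_init builds in the trace above can be reproduced by hand roughly as follows. This is a minimal sketch assuming the cvl_0_0/cvl_0_1 port names and the 10.0.0.x addressing reported in the log, not the exact helper from nvmf/common.sh:

# Put one port of the NIC into its own namespace so target and initiator
# talk over a real TCP path on a single machine.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP listener port and verify both directions before starting the apps.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1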
00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.349 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=749646 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=82499779f6cc74f85ae935ea29b95dd882210015020b9018 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.U6F 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 82499779f6cc74f85ae935ea29b95dd882210015020b9018 0 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 82499779f6cc74f85ae935ea29b95dd882210015020b9018 0 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=82499779f6cc74f85ae935ea29b95dd882210015020b9018 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.U6F 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.U6F 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.U6F 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
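Two SPDK processes carry the rest of the test: the target runs inside the namespace with -L nvmf_auth so the DH-HMAC-CHAP state machine is logged, and a second spdk_tgt on its own RPC socket plays the host. A rough sketch of that layout, assuming the build paths shown in the log:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# target side: inside the namespace, RPC on the default /var/tmp/spdk.sock
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvmf_auth &

# host side: pinned to one core, RPC on /var/tmp/host.sock, logging the initiator auth path
"$SPDK/build/bin/spdk_tgt" -m 2 -r /var/tmp/host.sock -L nvme_auth &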
key 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c2f3758100d0daa3527f58c2e9efb78356a8c96f89361726a616ae774725c7d4 00:15:29.606 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.210 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c2f3758100d0daa3527f58c2e9efb78356a8c96f89361726a616ae774725c7d4 3 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c2f3758100d0daa3527f58c2e9efb78356a8c96f89361726a616ae774725c7d4 3 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c2f3758100d0daa3527f58c2e9efb78356a8c96f89361726a616ae774725c7d4 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.210 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.210 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.210 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7c10946bb22afdc5df1c23c427160d19 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.UQI 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7c10946bb22afdc5df1c23c427160d19 1 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7c10946bb22afdc5df1c23c427160d19 1 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=7c10946bb22afdc5df1c23c427160d19 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:29.607 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.UQI 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.UQI 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.UQI 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0a1473976d228b539621dc3a63f24d869ccfbe14fbf87b89 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.e3T 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0a1473976d228b539621dc3a63f24d869ccfbe14fbf87b89 2 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0a1473976d228b539621dc3a63f24d869ccfbe14fbf87b89 2 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0a1473976d228b539621dc3a63f24d869ccfbe14fbf87b89 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.e3T 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.e3T 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.e3T 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d118e999c7073b5c884df492d191901a10a9a1b3a24ccd82 00:15:29.864 
15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.esZ 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d118e999c7073b5c884df492d191901a10a9a1b3a24ccd82 2 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d118e999c7073b5c884df492d191901a10a9a1b3a24ccd82 2 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d118e999c7073b5c884df492d191901a10a9a1b3a24ccd82 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:29.864 15:52:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:29.864 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.esZ 00:15:29.864 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.esZ 00:15:29.864 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.esZ 00:15:29.864 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:29.864 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8b6c168cb9722eb8180010a151ef645e 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.IYn 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8b6c168cb9722eb8180010a151ef645e 1 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8b6c168cb9722eb8180010a151ef645e 1 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8b6c168cb9722eb8180010a151ef645e 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.IYn 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.IYn 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.IYn 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e04c95ed6c21592c0992ee738afd7a66b3b991646dca65135c2b76c847e0a7cd 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Riw 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e04c95ed6c21592c0992ee738afd7a66b3b991646dca65135c2b76c847e0a7cd 3 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e04c95ed6c21592c0992ee738afd7a66b3b991646dca65135c2b76c847e0a7cd 3 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e04c95ed6c21592c0992ee738afd7a66b3b991646dca65135c2b76c847e0a7cd 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Riw 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Riw 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Riw 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 749506 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 749506 ']' 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
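Each gen_dhchap_key call above boils down to: draw random bytes, render them as a hex string of the requested length, and wrap that string in the DHHC-1:<hash-id>:<base64>: form that --dhchap-secret expects. The sketch below mirrors that flow; the CRC-32 trailer and its byte order are assumptions based on the NVMe DH-HMAC-CHAP secret representation, not a copy of the nvmf/common.sh helper.

# hash id in the prefix: 0 = no transform, 1 = sha256, 2 = sha384, 3 = sha512
digest=0
len=48                                            # hex characters => len/2 random bytes
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

secret=$(python3 - "$digest" "$key" <<'PY'
import sys, base64, struct, zlib
digest, key = int(sys.argv[1]), sys.argv[2].encode()
# base64 of the secret bytes followed by their CRC-32 (assumed little-endian trailer)
blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"DHHC-1:{digest:02d}:{blob}:")
PY
)

keyfile=$(mktemp -t spdk.key-null.XXX)
printf '%s\n' "$secret" > "$keyfile"
chmod 0600 "$keyfile"                             # e.g. keys[0] in the trace above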
00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.865 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.122 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.122 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:30.122 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 749646 /var/tmp/host.sock 00:15:30.122 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 749646 ']' 00:15:30.122 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:15:30.122 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.122 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:30.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:30.122 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.122 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.379 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.379 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:15:30.379 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:30.379 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.379 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.379 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.379 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:30.379 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.U6F 00:15:30.379 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.379 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.379 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.379 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.U6F 00:15:30.379 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.U6F 00:15:30.636 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.210 ]] 00:15:30.636 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.210 00:15:30.636 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.636 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.636 15:52:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.636 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.210 00:15:30.636 15:52:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.210 00:15:30.894 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:30.894 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.UQI 00:15:30.894 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.894 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.894 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.894 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.UQI 00:15:30.894 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.UQI 00:15:31.152 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.e3T ]] 00:15:31.152 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.e3T 00:15:31.152 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.152 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.152 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.152 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.e3T 00:15:31.152 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.e3T 00:15:31.410 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:31.410 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.esZ 00:15:31.410 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.410 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.410 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.410 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.esZ 00:15:31.410 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.esZ 00:15:31.667 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.IYn ]] 00:15:31.667 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IYn 00:15:31.667 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.667 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.667 15:52:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.667 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.IYn 00:15:31.667 15:52:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.IYn 00:15:31.925 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:31.925 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Riw 00:15:31.925 15:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.925 15:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.925 15:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.925 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Riw 00:15:31.925 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Riw 00:15:32.183 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:15:32.183 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:32.183 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.183 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.183 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:32.183 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:32.440 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:15:32.440 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.440 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:32.440 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:32.440 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:32.440 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.440 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.440 15:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.440 15:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.440 15:52:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.440 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.440 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.697 00:15:32.697 15:52:29 nvmf_tcp.nvmf_auth_target -- 
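The keyring registrations just above add every generated key file twice, once with the target over the default /var/tmp/spdk.sock and once with the host app over /var/tmp/host.sock, so both sides can refer to the same named entries (key0..key3, ckey0..ckey2). Condensed, and assuming the keys/ckeys arrays populated as in the trace:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

for i in "${!keys[@]}"; do
    # target side
    "$RPC" keyring_file_add_key "key$i" "${keys[$i]}"
    # host side
    "$RPC" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"
    # controller (bidirectional) keys are optional; ckeys[3] is empty in this run
    if [[ -n "${ckeys[$i]}" ]]; then
        "$RPC" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        "$RPC" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done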
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.697 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.697 15:52:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.955 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.955 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.955 15:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.955 15:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.212 15:52:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.212 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.212 { 00:15:33.212 "cntlid": 1, 00:15:33.212 "qid": 0, 00:15:33.212 "state": "enabled", 00:15:33.212 "thread": "nvmf_tgt_poll_group_000", 00:15:33.212 "listen_address": { 00:15:33.212 "trtype": "TCP", 00:15:33.212 "adrfam": "IPv4", 00:15:33.212 "traddr": "10.0.0.2", 00:15:33.212 "trsvcid": "4420" 00:15:33.212 }, 00:15:33.212 "peer_address": { 00:15:33.212 "trtype": "TCP", 00:15:33.212 "adrfam": "IPv4", 00:15:33.212 "traddr": "10.0.0.1", 00:15:33.212 "trsvcid": "35486" 00:15:33.212 }, 00:15:33.212 "auth": { 00:15:33.212 "state": "completed", 00:15:33.212 "digest": "sha256", 00:15:33.212 "dhgroup": "null" 00:15:33.212 } 00:15:33.212 } 00:15:33.212 ]' 00:15:33.212 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.212 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.212 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.212 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:33.212 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.212 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.212 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.212 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.469 15:52:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:15:34.401 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.401 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:34.401 15:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.401 15:52:31 
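Each connect_authenticate round then repeats the same pattern: allow the host NQN on the subsystem with a given key pair, attach a controller from the host app using the matching keyring names, and confirm from the target's qpair listing that authentication completed before tearing everything down again. Roughly, for the key0/ckey0 pass (NQNs as in the log, nvme0 as the controller name):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

# target: permit this host, binding it to key0 (host key) and ckey0 (controller key)
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host app: attach over TCP with the same keyring entries
"$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# target: the qpair's auth block reports the negotiated digest, dhgroup and state
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'    # expect "completed"

# tear down before the next key/digest/dhgroup combination
"$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"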
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.401 15:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.401 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.401 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:34.401 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:34.659 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:15:34.659 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.659 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:34.659 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:34.659 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:34.659 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.659 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.659 15:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.659 15:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.659 15:52:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.659 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.659 15:52:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.916 00:15:34.916 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.916 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.916 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.173 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.173 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.173 15:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.173 15:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.173 15:52:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.173 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.173 { 00:15:35.173 "cntlid": 3, 00:15:35.173 "qid": 0, 00:15:35.173 
"state": "enabled", 00:15:35.173 "thread": "nvmf_tgt_poll_group_000", 00:15:35.173 "listen_address": { 00:15:35.173 "trtype": "TCP", 00:15:35.173 "adrfam": "IPv4", 00:15:35.173 "traddr": "10.0.0.2", 00:15:35.173 "trsvcid": "4420" 00:15:35.173 }, 00:15:35.173 "peer_address": { 00:15:35.173 "trtype": "TCP", 00:15:35.173 "adrfam": "IPv4", 00:15:35.173 "traddr": "10.0.0.1", 00:15:35.173 "trsvcid": "35756" 00:15:35.173 }, 00:15:35.173 "auth": { 00:15:35.173 "state": "completed", 00:15:35.173 "digest": "sha256", 00:15:35.173 "dhgroup": "null" 00:15:35.173 } 00:15:35.173 } 00:15:35.173 ]' 00:15:35.173 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.173 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.173 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.430 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:35.430 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.430 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.430 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.430 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.687 15:52:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:15:36.618 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.618 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:36.618 15:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.619 15:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.619 15:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.619 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.619 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:36.619 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:36.876 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:15:36.876 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.876 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:36.876 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:36.876 15:52:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:36.876 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.876 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.876 15:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.876 15:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.876 15:52:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.876 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.876 15:52:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.133 00:15:37.133 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.133 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.133 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.391 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.391 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.391 15:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.391 15:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.391 15:52:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.391 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.391 { 00:15:37.391 "cntlid": 5, 00:15:37.391 "qid": 0, 00:15:37.391 "state": "enabled", 00:15:37.391 "thread": "nvmf_tgt_poll_group_000", 00:15:37.391 "listen_address": { 00:15:37.391 "trtype": "TCP", 00:15:37.391 "adrfam": "IPv4", 00:15:37.391 "traddr": "10.0.0.2", 00:15:37.391 "trsvcid": "4420" 00:15:37.391 }, 00:15:37.391 "peer_address": { 00:15:37.391 "trtype": "TCP", 00:15:37.391 "adrfam": "IPv4", 00:15:37.391 "traddr": "10.0.0.1", 00:15:37.391 "trsvcid": "35782" 00:15:37.391 }, 00:15:37.391 "auth": { 00:15:37.391 "state": "completed", 00:15:37.391 "digest": "sha256", 00:15:37.391 "dhgroup": "null" 00:15:37.391 } 00:15:37.391 } 00:15:37.391 ]' 00:15:37.391 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.391 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:37.391 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.391 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:37.391 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:15:37.391 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.391 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.391 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.648 15:52:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:15:38.577 15:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.577 15:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:38.577 15:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.577 15:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.577 15:52:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.577 15:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.577 15:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.577 15:52:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:38.834 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:15:38.834 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.834 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:38.834 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:38.834 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:38.834 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.834 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:38.834 15:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.834 15:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.834 15:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.834 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:38.834 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
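Besides the SPDK host app, every round also exercises the kernel initiator: nvme-cli connects to the same listener, passing the literal DHHC-1 secrets instead of keyring names, and then disconnects. Stripped of the test plumbing, the key2/ckey2 pass looks roughly like this (the secret strings are the contents of the generated /tmp/spdk.key-* files):

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02

# host secret first, controller (bidirectional) secret second
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret "$(cat /tmp/spdk.key-sha384.esZ)" \
    --dhchap-ctrl-secret "$(cat /tmp/spdk.key-sha256.IYn)"

nvme disconnect -n "$SUBNQN"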
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:39.091 00:15:39.092 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.092 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.092 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.349 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.349 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.349 15:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.349 15:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.349 15:52:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.349 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.349 { 00:15:39.349 "cntlid": 7, 00:15:39.349 "qid": 0, 00:15:39.349 "state": "enabled", 00:15:39.349 "thread": "nvmf_tgt_poll_group_000", 00:15:39.349 "listen_address": { 00:15:39.349 "trtype": "TCP", 00:15:39.349 "adrfam": "IPv4", 00:15:39.349 "traddr": "10.0.0.2", 00:15:39.349 "trsvcid": "4420" 00:15:39.349 }, 00:15:39.349 "peer_address": { 00:15:39.349 "trtype": "TCP", 00:15:39.349 "adrfam": "IPv4", 00:15:39.349 "traddr": "10.0.0.1", 00:15:39.349 "trsvcid": "35810" 00:15:39.349 }, 00:15:39.349 "auth": { 00:15:39.349 "state": "completed", 00:15:39.349 "digest": "sha256", 00:15:39.349 "dhgroup": "null" 00:15:39.349 } 00:15:39.349 } 00:15:39.349 ]' 00:15:39.349 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.607 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:39.607 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.607 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:39.607 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.607 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.607 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.607 15:52:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.866 15:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:15:40.799 15:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.799 15:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:40.799 15:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.799 15:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.799 15:52:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.799 15:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.799 15:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.799 15:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:40.799 15:52:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:41.057 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:15:41.057 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.057 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:41.057 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:41.057 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:41.057 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.057 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.057 15:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.057 15:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.057 15:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.057 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.057 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.314 00:15:41.314 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.314 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.314 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.572 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.572 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.572 15:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:15:41.572 15:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.572 15:52:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.572 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.572 { 00:15:41.572 "cntlid": 9, 00:15:41.572 "qid": 0, 00:15:41.572 "state": "enabled", 00:15:41.572 "thread": "nvmf_tgt_poll_group_000", 00:15:41.572 "listen_address": { 00:15:41.572 "trtype": "TCP", 00:15:41.572 "adrfam": "IPv4", 00:15:41.572 "traddr": "10.0.0.2", 00:15:41.572 "trsvcid": "4420" 00:15:41.572 }, 00:15:41.572 "peer_address": { 00:15:41.572 "trtype": "TCP", 00:15:41.572 "adrfam": "IPv4", 00:15:41.572 "traddr": "10.0.0.1", 00:15:41.572 "trsvcid": "35838" 00:15:41.572 }, 00:15:41.572 "auth": { 00:15:41.572 "state": "completed", 00:15:41.572 "digest": "sha256", 00:15:41.572 "dhgroup": "ffdhe2048" 00:15:41.572 } 00:15:41.572 } 00:15:41.572 ]' 00:15:41.572 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.572 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.572 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.572 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:41.572 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.572 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.572 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.572 15:52:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.830 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:15:42.763 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.763 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:42.763 15:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.763 15:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.763 15:52:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.763 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.763 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:42.763 15:52:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:43.021 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:15:43.021 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.021 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:43.021 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:43.021 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:43.021 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.021 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.021 15:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.021 15:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.021 15:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.021 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.021 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.585 00:15:43.585 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.585 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.585 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.585 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.585 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.585 15:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.585 15:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.585 15:52:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.585 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.585 { 00:15:43.585 "cntlid": 11, 00:15:43.585 "qid": 0, 00:15:43.585 "state": "enabled", 00:15:43.585 "thread": "nvmf_tgt_poll_group_000", 00:15:43.585 "listen_address": { 00:15:43.585 "trtype": "TCP", 00:15:43.585 "adrfam": "IPv4", 00:15:43.585 "traddr": "10.0.0.2", 00:15:43.585 "trsvcid": "4420" 00:15:43.585 }, 00:15:43.586 "peer_address": { 00:15:43.586 "trtype": "TCP", 00:15:43.586 "adrfam": "IPv4", 00:15:43.586 "traddr": "10.0.0.1", 00:15:43.586 "trsvcid": "42770" 00:15:43.586 }, 00:15:43.586 "auth": { 00:15:43.586 "state": "completed", 00:15:43.586 "digest": "sha256", 00:15:43.586 "dhgroup": "ffdhe2048" 00:15:43.586 } 00:15:43.586 } 00:15:43.586 ]' 00:15:43.586 
15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.586 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.586 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.843 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:43.843 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.843 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.843 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.843 15:52:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.100 15:52:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:15:45.058 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.058 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:45.058 15:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.058 15:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.058 15:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.058 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.058 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:45.058 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:45.315 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:15:45.315 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.315 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:45.315 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:45.315 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:45.315 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.315 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.315 15:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.315 15:52:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:45.315 15:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.315 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.315 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.575 00:15:45.575 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.575 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.575 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.833 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.833 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.833 15:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.833 15:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.833 15:52:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.833 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.833 { 00:15:45.833 "cntlid": 13, 00:15:45.833 "qid": 0, 00:15:45.833 "state": "enabled", 00:15:45.833 "thread": "nvmf_tgt_poll_group_000", 00:15:45.833 "listen_address": { 00:15:45.833 "trtype": "TCP", 00:15:45.833 "adrfam": "IPv4", 00:15:45.833 "traddr": "10.0.0.2", 00:15:45.833 "trsvcid": "4420" 00:15:45.833 }, 00:15:45.833 "peer_address": { 00:15:45.833 "trtype": "TCP", 00:15:45.833 "adrfam": "IPv4", 00:15:45.833 "traddr": "10.0.0.1", 00:15:45.833 "trsvcid": "42806" 00:15:45.833 }, 00:15:45.833 "auth": { 00:15:45.833 "state": "completed", 00:15:45.833 "digest": "sha256", 00:15:45.833 "dhgroup": "ffdhe2048" 00:15:45.833 } 00:15:45.833 } 00:15:45.833 ]' 00:15:45.833 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.833 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.833 15:52:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.833 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.833 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.833 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.833 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.833 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.090 15:52:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:15:47.023 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.023 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:47.023 15:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.023 15:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.023 15:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.023 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.023 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:47.023 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:47.280 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:15:47.280 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.280 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:47.280 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:47.280 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:47.280 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.280 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:47.280 15:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.280 15:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.280 15:52:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.280 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:47.280 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:47.846 00:15:47.846 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.846 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.846 15:52:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.846 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.846 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.846 15:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.846 15:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.846 15:52:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.846 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.846 { 00:15:47.846 "cntlid": 15, 00:15:47.846 "qid": 0, 00:15:47.846 "state": "enabled", 00:15:47.846 "thread": "nvmf_tgt_poll_group_000", 00:15:47.846 "listen_address": { 00:15:47.846 "trtype": "TCP", 00:15:47.846 "adrfam": "IPv4", 00:15:47.846 "traddr": "10.0.0.2", 00:15:47.846 "trsvcid": "4420" 00:15:47.846 }, 00:15:47.846 "peer_address": { 00:15:47.846 "trtype": "TCP", 00:15:47.846 "adrfam": "IPv4", 00:15:47.846 "traddr": "10.0.0.1", 00:15:47.846 "trsvcid": "42842" 00:15:47.846 }, 00:15:47.846 "auth": { 00:15:47.846 "state": "completed", 00:15:47.846 "digest": "sha256", 00:15:47.846 "dhgroup": "ffdhe2048" 00:15:47.846 } 00:15:47.846 } 00:15:47.846 ]' 00:15:47.846 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.104 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.104 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.104 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:48.104 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.104 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.104 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.104 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.362 15:52:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:15:49.295 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.295 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:49.295 15:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.295 15:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.295 15:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.295 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.295 15:52:46 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.295 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:49.295 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:49.583 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:15:49.583 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.583 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:49.583 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:49.583 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:49.583 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.583 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.583 15:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.583 15:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.583 15:52:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.583 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.583 15:52:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.841 00:15:49.841 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:49.841 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.841 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.099 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.099 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.099 15:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.099 15:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.099 15:52:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.099 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.099 { 00:15:50.099 "cntlid": 17, 00:15:50.099 "qid": 0, 00:15:50.099 "state": "enabled", 00:15:50.099 "thread": "nvmf_tgt_poll_group_000", 00:15:50.099 "listen_address": { 00:15:50.099 "trtype": "TCP", 00:15:50.099 "adrfam": "IPv4", 00:15:50.099 "traddr": 
"10.0.0.2", 00:15:50.099 "trsvcid": "4420" 00:15:50.099 }, 00:15:50.099 "peer_address": { 00:15:50.099 "trtype": "TCP", 00:15:50.099 "adrfam": "IPv4", 00:15:50.099 "traddr": "10.0.0.1", 00:15:50.099 "trsvcid": "42868" 00:15:50.099 }, 00:15:50.099 "auth": { 00:15:50.099 "state": "completed", 00:15:50.099 "digest": "sha256", 00:15:50.099 "dhgroup": "ffdhe3072" 00:15:50.099 } 00:15:50.099 } 00:15:50.099 ]' 00:15:50.099 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.099 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.099 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.356 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:50.356 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.356 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.356 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.356 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.613 15:52:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.543 15:52:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.105 00:15:52.105 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.105 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.105 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.362 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.362 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.362 15:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.362 15:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.362 15:52:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.362 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.362 { 00:15:52.362 "cntlid": 19, 00:15:52.362 "qid": 0, 00:15:52.362 "state": "enabled", 00:15:52.362 "thread": "nvmf_tgt_poll_group_000", 00:15:52.362 "listen_address": { 00:15:52.362 "trtype": "TCP", 00:15:52.362 "adrfam": "IPv4", 00:15:52.362 "traddr": "10.0.0.2", 00:15:52.362 "trsvcid": "4420" 00:15:52.362 }, 00:15:52.362 "peer_address": { 00:15:52.362 "trtype": "TCP", 00:15:52.362 "adrfam": "IPv4", 00:15:52.362 "traddr": "10.0.0.1", 00:15:52.362 "trsvcid": "42894" 00:15:52.362 }, 00:15:52.362 "auth": { 00:15:52.362 "state": "completed", 00:15:52.362 "digest": "sha256", 00:15:52.362 "dhgroup": "ffdhe3072" 00:15:52.362 } 00:15:52.362 } 00:15:52.362 ]' 00:15:52.362 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.362 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.362 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.362 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:52.362 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.362 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.362 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.362 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.619 15:52:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:15:53.560 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.560 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:53.560 15:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.560 15:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.560 15:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.560 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.560 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.560 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:53.817 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:15:53.817 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.817 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:53.817 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:53.817 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:53.817 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.817 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.817 15:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.817 15:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.817 15:52:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.817 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.817 15:52:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:54.074 00:15:54.074 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.074 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.074 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.331 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.331 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.331 15:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.331 15:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.331 15:52:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.331 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.331 { 00:15:54.331 "cntlid": 21, 00:15:54.331 "qid": 0, 00:15:54.331 "state": "enabled", 00:15:54.331 "thread": "nvmf_tgt_poll_group_000", 00:15:54.331 "listen_address": { 00:15:54.331 "trtype": "TCP", 00:15:54.331 "adrfam": "IPv4", 00:15:54.331 "traddr": "10.0.0.2", 00:15:54.331 "trsvcid": "4420" 00:15:54.331 }, 00:15:54.331 "peer_address": { 00:15:54.331 "trtype": "TCP", 00:15:54.331 "adrfam": "IPv4", 00:15:54.331 "traddr": "10.0.0.1", 00:15:54.331 "trsvcid": "49276" 00:15:54.331 }, 00:15:54.331 "auth": { 00:15:54.331 "state": "completed", 00:15:54.331 "digest": "sha256", 00:15:54.331 "dhgroup": "ffdhe3072" 00:15:54.331 } 00:15:54.331 } 00:15:54.331 ]' 00:15:54.331 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.331 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.331 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.331 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:54.331 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.588 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.588 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.588 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.845 15:52:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:15:55.778 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
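The entries above complete one key iteration of the sha256/ffdhe3072 pass: the target registers the host NQN with a DH-HMAC-CHAP key, the SPDK host attaches a bdev_nvme controller and the qpair's negotiated auth parameters are checked, and the kernel initiator then repeats the handshake with nvme connect/disconnect. A condensed sketch of that per-key sequence follows, assuming the rpc_cmd calls traced above map to scripts/rpc.py on the target's default socket while the host-side calls use -s /var/tmp/host.sock (as the hostrpc traces show), and with the host NQN/UUID and DHHC secrets abbreviated to placeholders for the cd6acfbe-... values recorded in this run:

    # host side: restrict the allowed digest/dhgroup for this pass
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    # target side: allow the host with key2 (and controller key ckey2)
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <host NQN> --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # host side: attach, verify the authenticated qpair, then detach
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q <host NQN> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0    # auth.state should read "completed"
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # kernel initiator: same handshake via nvme-cli, then clean up the host entry
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <host NQN> --hostid <host UUID> --dhchap-secret DHHC-1:02:... --dhchap-ctrl-secret DHHC-1:01:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <host NQN>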
00:15:55.778 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:55.778 15:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.778 15:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.778 15:52:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.778 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.778 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:55.778 15:52:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:56.035 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:15:56.035 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.035 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:56.035 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:56.035 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:56.035 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.035 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:15:56.035 15:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.035 15:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.035 15:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.035 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:56.035 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:56.292 00:15:56.292 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.292 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.292 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.550 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.550 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.550 15:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.550 15:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:15:56.550 15:52:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.550 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.550 { 00:15:56.550 "cntlid": 23, 00:15:56.550 "qid": 0, 00:15:56.550 "state": "enabled", 00:15:56.550 "thread": "nvmf_tgt_poll_group_000", 00:15:56.550 "listen_address": { 00:15:56.550 "trtype": "TCP", 00:15:56.550 "adrfam": "IPv4", 00:15:56.550 "traddr": "10.0.0.2", 00:15:56.550 "trsvcid": "4420" 00:15:56.550 }, 00:15:56.550 "peer_address": { 00:15:56.550 "trtype": "TCP", 00:15:56.550 "adrfam": "IPv4", 00:15:56.550 "traddr": "10.0.0.1", 00:15:56.550 "trsvcid": "49312" 00:15:56.550 }, 00:15:56.550 "auth": { 00:15:56.550 "state": "completed", 00:15:56.550 "digest": "sha256", 00:15:56.550 "dhgroup": "ffdhe3072" 00:15:56.550 } 00:15:56.550 } 00:15:56.550 ]' 00:15:56.550 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.550 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:56.550 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.807 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:56.807 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.807 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.807 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.807 15:52:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.064 15:52:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.995 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:58.560 00:15:58.560 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.560 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.560 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.818 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.818 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.818 15:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.818 15:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.818 15:52:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.818 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.818 { 00:15:58.818 "cntlid": 25, 00:15:58.818 "qid": 0, 00:15:58.818 "state": "enabled", 00:15:58.818 "thread": "nvmf_tgt_poll_group_000", 00:15:58.818 "listen_address": { 00:15:58.818 "trtype": "TCP", 00:15:58.818 "adrfam": "IPv4", 00:15:58.818 "traddr": "10.0.0.2", 00:15:58.818 "trsvcid": "4420" 00:15:58.818 }, 00:15:58.818 "peer_address": { 00:15:58.818 "trtype": "TCP", 00:15:58.818 "adrfam": "IPv4", 00:15:58.818 "traddr": "10.0.0.1", 00:15:58.818 "trsvcid": "49328" 00:15:58.818 }, 00:15:58.818 "auth": { 00:15:58.818 "state": "completed", 00:15:58.818 "digest": "sha256", 00:15:58.818 "dhgroup": "ffdhe4096" 00:15:58.818 } 00:15:58.818 } 00:15:58.818 ]' 00:15:58.818 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.818 15:52:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:58.818 15:52:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.818 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:58.818 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.818 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.818 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.818 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.076 15:52:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:16:00.008 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.008 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:00.008 15:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.008 15:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.008 15:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.008 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.008 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:00.008 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:00.266 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:00.266 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.266 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:00.266 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:00.266 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:00.266 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.266 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.266 15:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.266 15:52:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.266 15:52:57 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.266 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.266 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:00.830 00:16:00.830 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.830 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.830 15:52:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.088 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.088 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.088 15:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.088 15:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.088 15:52:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.088 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.088 { 00:16:01.088 "cntlid": 27, 00:16:01.088 "qid": 0, 00:16:01.088 "state": "enabled", 00:16:01.088 "thread": "nvmf_tgt_poll_group_000", 00:16:01.088 "listen_address": { 00:16:01.088 "trtype": "TCP", 00:16:01.088 "adrfam": "IPv4", 00:16:01.088 "traddr": "10.0.0.2", 00:16:01.088 "trsvcid": "4420" 00:16:01.088 }, 00:16:01.088 "peer_address": { 00:16:01.088 "trtype": "TCP", 00:16:01.088 "adrfam": "IPv4", 00:16:01.088 "traddr": "10.0.0.1", 00:16:01.088 "trsvcid": "49360" 00:16:01.088 }, 00:16:01.088 "auth": { 00:16:01.088 "state": "completed", 00:16:01.088 "digest": "sha256", 00:16:01.088 "dhgroup": "ffdhe4096" 00:16:01.088 } 00:16:01.088 } 00:16:01.088 ]' 00:16:01.088 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.088 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:01.088 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.088 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:01.088 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.088 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.088 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.088 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.346 15:52:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:16:02.278 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.278 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:02.278 15:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.278 15:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.278 15:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.278 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.278 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:02.278 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:02.843 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:02.843 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.843 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:02.843 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:02.843 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:02.843 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.843 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.843 15:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.843 15:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.843 15:52:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.843 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:02.843 15:52:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.101 00:16:03.101 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.101 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.101 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.358 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.358 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.358 15:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.358 15:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.358 15:53:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.358 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.358 { 00:16:03.358 "cntlid": 29, 00:16:03.358 "qid": 0, 00:16:03.358 "state": "enabled", 00:16:03.358 "thread": "nvmf_tgt_poll_group_000", 00:16:03.358 "listen_address": { 00:16:03.358 "trtype": "TCP", 00:16:03.358 "adrfam": "IPv4", 00:16:03.358 "traddr": "10.0.0.2", 00:16:03.358 "trsvcid": "4420" 00:16:03.358 }, 00:16:03.358 "peer_address": { 00:16:03.358 "trtype": "TCP", 00:16:03.358 "adrfam": "IPv4", 00:16:03.358 "traddr": "10.0.0.1", 00:16:03.358 "trsvcid": "49386" 00:16:03.358 }, 00:16:03.358 "auth": { 00:16:03.358 "state": "completed", 00:16:03.358 "digest": "sha256", 00:16:03.358 "dhgroup": "ffdhe4096" 00:16:03.358 } 00:16:03.358 } 00:16:03.358 ]' 00:16:03.358 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.358 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.358 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.358 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:03.358 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.358 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.358 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.358 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.615 15:53:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:16:04.547 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.547 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:04.547 15:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.547 15:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.547 15:53:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
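As with the earlier dhgroups, each ffdhe4096 iteration is validated by dumping the subsystem's qpairs and checking the negotiated auth fields with jq before the controller is detached. A minimal sketch of that check (not the literal target/auth.sh code), using the field names visible in the qpairs JSON above:

    # fetch the active qpairs for the subsystem (target-side RPC)
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # the first qpair must have completed DH-HMAC-CHAP with the expected parameters
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]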
00:16:04.547 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.547 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:04.547 15:53:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:05.112 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:05.112 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.112 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:05.112 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:05.112 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:05.112 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.112 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:05.112 15:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.112 15:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.112 15:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.112 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:05.112 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:05.370 00:16:05.370 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.370 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.370 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.628 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.628 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.628 15:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.628 15:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.628 15:53:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.628 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.628 { 00:16:05.628 "cntlid": 31, 00:16:05.628 "qid": 0, 00:16:05.628 "state": "enabled", 00:16:05.628 "thread": "nvmf_tgt_poll_group_000", 00:16:05.628 "listen_address": { 00:16:05.628 "trtype": "TCP", 00:16:05.628 "adrfam": "IPv4", 00:16:05.628 "traddr": "10.0.0.2", 00:16:05.628 "trsvcid": 
"4420" 00:16:05.628 }, 00:16:05.628 "peer_address": { 00:16:05.628 "trtype": "TCP", 00:16:05.628 "adrfam": "IPv4", 00:16:05.628 "traddr": "10.0.0.1", 00:16:05.628 "trsvcid": "48596" 00:16:05.628 }, 00:16:05.628 "auth": { 00:16:05.628 "state": "completed", 00:16:05.628 "digest": "sha256", 00:16:05.628 "dhgroup": "ffdhe4096" 00:16:05.628 } 00:16:05.628 } 00:16:05.628 ]' 00:16:05.628 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.628 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.628 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.628 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:05.628 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.893 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.893 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.893 15:53:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.153 15:53:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:16:07.085 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.086 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.650 00:16:07.650 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.650 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.650 15:53:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.908 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.908 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.908 15:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.908 15:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.908 15:53:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.908 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.908 { 00:16:07.908 "cntlid": 33, 00:16:07.908 "qid": 0, 00:16:07.908 "state": "enabled", 00:16:07.908 "thread": "nvmf_tgt_poll_group_000", 00:16:07.908 "listen_address": { 00:16:07.908 "trtype": "TCP", 00:16:07.908 "adrfam": "IPv4", 00:16:07.908 "traddr": "10.0.0.2", 00:16:07.908 "trsvcid": "4420" 00:16:07.908 }, 00:16:07.908 "peer_address": { 00:16:07.908 "trtype": "TCP", 00:16:07.908 "adrfam": "IPv4", 00:16:07.908 "traddr": "10.0.0.1", 00:16:07.908 "trsvcid": "48616" 00:16:07.908 }, 00:16:07.908 "auth": { 00:16:07.908 "state": "completed", 00:16:07.908 "digest": "sha256", 00:16:07.908 "dhgroup": "ffdhe6144" 00:16:07.908 } 00:16:07.908 } 00:16:07.908 ]' 00:16:07.908 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.908 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.908 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.166 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:08.166 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.166 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:08.166 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.166 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.424 15:53:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:16:09.356 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.356 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:09.356 15:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.356 15:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.356 15:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.356 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.356 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:09.356 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:09.614 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:09.614 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.614 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:09.614 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:09.614 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:09.614 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.614 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.614 15:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.614 15:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.614 15:53:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.614 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:09.614 15:53:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:10.179 00:16:10.179 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.179 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.179 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.179 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.179 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.179 15:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.179 15:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.179 15:53:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.179 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.179 { 00:16:10.179 "cntlid": 35, 00:16:10.179 "qid": 0, 00:16:10.179 "state": "enabled", 00:16:10.179 "thread": "nvmf_tgt_poll_group_000", 00:16:10.179 "listen_address": { 00:16:10.179 "trtype": "TCP", 00:16:10.179 "adrfam": "IPv4", 00:16:10.179 "traddr": "10.0.0.2", 00:16:10.179 "trsvcid": "4420" 00:16:10.179 }, 00:16:10.179 "peer_address": { 00:16:10.179 "trtype": "TCP", 00:16:10.179 "adrfam": "IPv4", 00:16:10.179 "traddr": "10.0.0.1", 00:16:10.179 "trsvcid": "48636" 00:16:10.179 }, 00:16:10.179 "auth": { 00:16:10.179 "state": "completed", 00:16:10.179 "digest": "sha256", 00:16:10.179 "dhgroup": "ffdhe6144" 00:16:10.179 } 00:16:10.179 } 00:16:10.179 ]' 00:16:10.179 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.437 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.438 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.438 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.438 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.438 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.438 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.438 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.695 15:53:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:16:11.626 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
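Each attach is followed by the verification and teardown steps visible in the qpair dumps above: the jq checks confirm that the completed handshake negotiated the requested digest and DH group, and the kernel initiator then re-runs the same handshake through nvme-cli. A condensed sketch under the same assumptions as the previous one, with the DHHC-1 secrets from the log replaced by placeholder variables key_secret and ctrl_secret:

    # 4. confirm the negotiated auth parameters reported by the target
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'     # expect: nvme0
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    jq -r '.[0].auth.state'   <<< "$qpairs"                  # expect: completed
    jq -r '.[0].auth.digest'  <<< "$qpairs"                  # sha256 in the iteration above
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"                  # ffdhe6144 in the iteration above

    # 5. drop the SPDK initiator and repeat the handshake with the kernel host
    hostrpc bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "${hostnqn#*uuid:}" \
        --dhchap-secret "$key_secret" --dhchap-ctrl-secret "$ctrl_secret"
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"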
00:16:11.626 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:11.626 15:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.626 15:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.626 15:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.626 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.626 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:11.627 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:11.884 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:11.884 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.884 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.884 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:11.884 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:11.884 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.884 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.884 15:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.884 15:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.884 15:53:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.884 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.884 15:53:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.142 00:16:12.401 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.401 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.401 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.401 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.401 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.401 15:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:16:12.401 15:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.659 15:53:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.659 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.659 { 00:16:12.659 "cntlid": 37, 00:16:12.659 "qid": 0, 00:16:12.659 "state": "enabled", 00:16:12.659 "thread": "nvmf_tgt_poll_group_000", 00:16:12.659 "listen_address": { 00:16:12.659 "trtype": "TCP", 00:16:12.659 "adrfam": "IPv4", 00:16:12.659 "traddr": "10.0.0.2", 00:16:12.659 "trsvcid": "4420" 00:16:12.659 }, 00:16:12.659 "peer_address": { 00:16:12.659 "trtype": "TCP", 00:16:12.659 "adrfam": "IPv4", 00:16:12.659 "traddr": "10.0.0.1", 00:16:12.659 "trsvcid": "48650" 00:16:12.659 }, 00:16:12.659 "auth": { 00:16:12.659 "state": "completed", 00:16:12.659 "digest": "sha256", 00:16:12.659 "dhgroup": "ffdhe6144" 00:16:12.659 } 00:16:12.659 } 00:16:12.659 ]' 00:16:12.659 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.659 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:12.659 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.659 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:12.659 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.659 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.659 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.659 15:53:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.917 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:16:13.850 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.850 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:13.850 15:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.850 15:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.850 15:53:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.850 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.850 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:13.850 15:53:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.107 15:53:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:14.107 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.107 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:14.107 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:14.107 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:14.107 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.107 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:14.107 15:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.107 15:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.107 15:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.107 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.107 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.671 00:16:14.671 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.671 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.671 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.929 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.929 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.929 15:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.929 15:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.929 15:53:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.929 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.929 { 00:16:14.929 "cntlid": 39, 00:16:14.929 "qid": 0, 00:16:14.929 "state": "enabled", 00:16:14.929 "thread": "nvmf_tgt_poll_group_000", 00:16:14.929 "listen_address": { 00:16:14.929 "trtype": "TCP", 00:16:14.929 "adrfam": "IPv4", 00:16:14.929 "traddr": "10.0.0.2", 00:16:14.929 "trsvcid": "4420" 00:16:14.929 }, 00:16:14.929 "peer_address": { 00:16:14.929 "trtype": "TCP", 00:16:14.929 "adrfam": "IPv4", 00:16:14.929 "traddr": "10.0.0.1", 00:16:14.929 "trsvcid": "39050" 00:16:14.929 }, 00:16:14.929 "auth": { 00:16:14.929 "state": "completed", 00:16:14.929 "digest": "sha256", 00:16:14.929 "dhgroup": "ffdhe6144" 00:16:14.929 } 00:16:14.929 } 00:16:14.929 ]' 00:16:14.929 15:53:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.929 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.929 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.929 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:14.929 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.929 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.929 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.929 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.187 15:53:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:16:16.119 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.119 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:16.119 15:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.119 15:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.119 15:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.119 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.119 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.119 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.119 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.376 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:16.376 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.376 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:16.376 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:16.376 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:16.377 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.377 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.377 15:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.377 15:53:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.377 15:53:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.377 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.377 15:53:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.308 00:16:17.308 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.308 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.308 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.566 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.566 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.566 15:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.566 15:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.566 15:53:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.566 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.566 { 00:16:17.566 "cntlid": 41, 00:16:17.566 "qid": 0, 00:16:17.566 "state": "enabled", 00:16:17.566 "thread": "nvmf_tgt_poll_group_000", 00:16:17.566 "listen_address": { 00:16:17.566 "trtype": "TCP", 00:16:17.566 "adrfam": "IPv4", 00:16:17.566 "traddr": "10.0.0.2", 00:16:17.566 "trsvcid": "4420" 00:16:17.566 }, 00:16:17.566 "peer_address": { 00:16:17.566 "trtype": "TCP", 00:16:17.566 "adrfam": "IPv4", 00:16:17.566 "traddr": "10.0.0.1", 00:16:17.566 "trsvcid": "39076" 00:16:17.566 }, 00:16:17.566 "auth": { 00:16:17.566 "state": "completed", 00:16:17.566 "digest": "sha256", 00:16:17.566 "dhgroup": "ffdhe8192" 00:16:17.566 } 00:16:17.566 } 00:16:17.566 ]' 00:16:17.566 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.566 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.566 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.566 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.566 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.566 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.566 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.566 15:53:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.131 15:53:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:16:19.062 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.062 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:19.062 15:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.062 15:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.062 15:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.062 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.062 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:19.062 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:19.318 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:19.318 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.318 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:19.318 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:19.318 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:19.318 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.318 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.318 15:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.318 15:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.318 15:53:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.318 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.318 15:53:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:20.248 00:16:20.248 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.248 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.248 15:53:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.248 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.248 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.248 15:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.248 15:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.248 15:53:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.248 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.248 { 00:16:20.248 "cntlid": 43, 00:16:20.248 "qid": 0, 00:16:20.248 "state": "enabled", 00:16:20.248 "thread": "nvmf_tgt_poll_group_000", 00:16:20.248 "listen_address": { 00:16:20.248 "trtype": "TCP", 00:16:20.248 "adrfam": "IPv4", 00:16:20.248 "traddr": "10.0.0.2", 00:16:20.248 "trsvcid": "4420" 00:16:20.248 }, 00:16:20.248 "peer_address": { 00:16:20.248 "trtype": "TCP", 00:16:20.248 "adrfam": "IPv4", 00:16:20.248 "traddr": "10.0.0.1", 00:16:20.248 "trsvcid": "39102" 00:16:20.248 }, 00:16:20.248 "auth": { 00:16:20.248 "state": "completed", 00:16:20.248 "digest": "sha256", 00:16:20.248 "dhgroup": "ffdhe8192" 00:16:20.248 } 00:16:20.248 } 00:16:20.248 ]' 00:16:20.248 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.248 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.248 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.506 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.506 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.506 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.506 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.506 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.763 15:53:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:16:21.763 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.763 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:21.763 15:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.763 15:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.763 15:53:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.763 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:16:21.763 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:21.763 15:53:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:22.035 15:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:22.036 15:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.036 15:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:22.036 15:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:22.036 15:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:22.036 15:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.036 15:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.036 15:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.036 15:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.036 15:53:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.036 15:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.036 15:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:22.600 00:16:22.600 15:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.600 15:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.600 15:53:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.857 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.857 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.857 15:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.857 15:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.857 15:53:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.857 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.857 { 00:16:22.857 "cntlid": 45, 00:16:22.857 "qid": 0, 00:16:22.857 "state": "enabled", 00:16:22.857 "thread": "nvmf_tgt_poll_group_000", 00:16:22.857 "listen_address": { 00:16:22.857 "trtype": "TCP", 00:16:22.857 "adrfam": "IPv4", 00:16:22.857 "traddr": "10.0.0.2", 00:16:22.857 
"trsvcid": "4420" 00:16:22.857 }, 00:16:22.857 "peer_address": { 00:16:22.857 "trtype": "TCP", 00:16:22.857 "adrfam": "IPv4", 00:16:22.857 "traddr": "10.0.0.1", 00:16:22.857 "trsvcid": "39122" 00:16:22.857 }, 00:16:22.857 "auth": { 00:16:22.857 "state": "completed", 00:16:22.857 "digest": "sha256", 00:16:22.857 "dhgroup": "ffdhe8192" 00:16:22.857 } 00:16:22.857 } 00:16:22.857 ]' 00:16:22.857 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.114 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:23.114 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.114 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:23.114 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.114 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.114 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.114 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.371 15:53:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:16:24.302 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.302 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:24.302 15:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.302 15:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.302 15:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.302 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.302 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:24.302 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:24.560 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:24.560 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.560 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:24.560 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:24.560 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:24.560 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:16:24.560 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:24.560 15:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.560 15:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.560 15:53:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.560 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:24.560 15:53:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:25.491 00:16:25.491 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.491 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.491 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.748 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.748 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.748 15:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.748 15:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.748 15:53:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.748 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.748 { 00:16:25.748 "cntlid": 47, 00:16:25.748 "qid": 0, 00:16:25.748 "state": "enabled", 00:16:25.748 "thread": "nvmf_tgt_poll_group_000", 00:16:25.748 "listen_address": { 00:16:25.748 "trtype": "TCP", 00:16:25.748 "adrfam": "IPv4", 00:16:25.748 "traddr": "10.0.0.2", 00:16:25.748 "trsvcid": "4420" 00:16:25.748 }, 00:16:25.748 "peer_address": { 00:16:25.748 "trtype": "TCP", 00:16:25.748 "adrfam": "IPv4", 00:16:25.748 "traddr": "10.0.0.1", 00:16:25.748 "trsvcid": "38490" 00:16:25.748 }, 00:16:25.748 "auth": { 00:16:25.748 "state": "completed", 00:16:25.748 "digest": "sha256", 00:16:25.748 "dhgroup": "ffdhe8192" 00:16:25.748 } 00:16:25.748 } 00:16:25.748 ]' 00:16:25.748 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.748 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:25.748 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.748 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:25.748 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.748 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.748 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:16:25.748 15:53:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.005 15:53:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:16:26.936 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.936 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:26.936 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.936 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.936 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.936 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:26.936 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.936 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.936 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:26.936 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:27.193 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:27.193 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.193 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:27.193 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:27.193 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:27.194 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.194 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.194 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.194 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.194 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.194 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.194 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.451 00:16:27.451 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.451 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.451 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.708 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.708 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.708 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.708 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.708 15:53:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.708 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.708 { 00:16:27.708 "cntlid": 49, 00:16:27.708 "qid": 0, 00:16:27.708 "state": "enabled", 00:16:27.708 "thread": "nvmf_tgt_poll_group_000", 00:16:27.708 "listen_address": { 00:16:27.708 "trtype": "TCP", 00:16:27.708 "adrfam": "IPv4", 00:16:27.708 "traddr": "10.0.0.2", 00:16:27.708 "trsvcid": "4420" 00:16:27.708 }, 00:16:27.708 "peer_address": { 00:16:27.708 "trtype": "TCP", 00:16:27.708 "adrfam": "IPv4", 00:16:27.708 "traddr": "10.0.0.1", 00:16:27.708 "trsvcid": "38524" 00:16:27.708 }, 00:16:27.708 "auth": { 00:16:27.708 "state": "completed", 00:16:27.708 "digest": "sha384", 00:16:27.708 "dhgroup": "null" 00:16:27.708 } 00:16:27.708 } 00:16:27.708 ]' 00:16:27.708 15:53:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.966 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.966 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.966 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:27.966 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.966 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.966 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.966 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.223 15:53:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:16:29.155 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.155 15:53:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:29.155 15:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.155 15:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.155 15:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.155 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.155 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:29.155 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:29.412 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:29.412 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.412 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:29.412 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:29.412 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:29.412 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.412 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.412 15:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.412 15:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.412 15:53:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.412 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.412 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:29.669 00:16:29.669 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.669 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.669 15:53:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.926 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.926 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.926 15:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.926 15:53:27 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:16:29.926 15:53:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.926 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.926 { 00:16:29.926 "cntlid": 51, 00:16:29.926 "qid": 0, 00:16:29.926 "state": "enabled", 00:16:29.926 "thread": "nvmf_tgt_poll_group_000", 00:16:29.926 "listen_address": { 00:16:29.926 "trtype": "TCP", 00:16:29.926 "adrfam": "IPv4", 00:16:29.926 "traddr": "10.0.0.2", 00:16:29.926 "trsvcid": "4420" 00:16:29.926 }, 00:16:29.926 "peer_address": { 00:16:29.926 "trtype": "TCP", 00:16:29.926 "adrfam": "IPv4", 00:16:29.926 "traddr": "10.0.0.1", 00:16:29.926 "trsvcid": "38558" 00:16:29.926 }, 00:16:29.926 "auth": { 00:16:29.926 "state": "completed", 00:16:29.926 "digest": "sha384", 00:16:29.926 "dhgroup": "null" 00:16:29.926 } 00:16:29.926 } 00:16:29.926 ]' 00:16:29.926 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.926 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:29.926 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.926 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:29.926 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.926 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.926 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.926 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.184 15:53:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:16:31.116 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.116 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:31.116 15:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.116 15:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.116 15:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.116 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.116 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:31.116 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:31.373 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:31.373 
15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.373 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:31.374 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:31.374 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:31.374 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.374 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.374 15:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.374 15:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.374 15:53:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.374 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.374 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:31.630 00:16:31.630 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.630 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.630 15:53:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.888 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.888 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.888 15:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.888 15:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.888 15:53:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.888 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.888 { 00:16:31.888 "cntlid": 53, 00:16:31.888 "qid": 0, 00:16:31.888 "state": "enabled", 00:16:31.888 "thread": "nvmf_tgt_poll_group_000", 00:16:31.888 "listen_address": { 00:16:31.888 "trtype": "TCP", 00:16:31.888 "adrfam": "IPv4", 00:16:31.888 "traddr": "10.0.0.2", 00:16:31.888 "trsvcid": "4420" 00:16:31.888 }, 00:16:31.888 "peer_address": { 00:16:31.888 "trtype": "TCP", 00:16:31.888 "adrfam": "IPv4", 00:16:31.888 "traddr": "10.0.0.1", 00:16:31.888 "trsvcid": "38592" 00:16:31.888 }, 00:16:31.888 "auth": { 00:16:31.888 "state": "completed", 00:16:31.888 "digest": "sha384", 00:16:31.888 "dhgroup": "null" 00:16:31.888 } 00:16:31.888 } 00:16:31.888 ]' 00:16:31.888 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.888 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:16:31.888 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:31.888 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:31.888 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.146 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.146 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.146 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.404 15:53:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:16:33.336 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.336 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:33.336 15:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.336 15:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.336 15:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.336 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.336 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:33.336 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:33.594 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:16:33.594 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.594 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:33.594 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:33.594 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:33.594 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.594 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:33.594 15:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.594 15:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.594 15:53:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.594 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.594 15:53:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:33.851 00:16:33.851 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.851 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.851 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.108 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.108 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.108 15:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.108 15:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.108 15:53:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.108 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.108 { 00:16:34.108 "cntlid": 55, 00:16:34.108 "qid": 0, 00:16:34.108 "state": "enabled", 00:16:34.108 "thread": "nvmf_tgt_poll_group_000", 00:16:34.108 "listen_address": { 00:16:34.108 "trtype": "TCP", 00:16:34.108 "adrfam": "IPv4", 00:16:34.108 "traddr": "10.0.0.2", 00:16:34.108 "trsvcid": "4420" 00:16:34.108 }, 00:16:34.108 "peer_address": { 00:16:34.108 "trtype": "TCP", 00:16:34.108 "adrfam": "IPv4", 00:16:34.108 "traddr": "10.0.0.1", 00:16:34.108 "trsvcid": "33652" 00:16:34.108 }, 00:16:34.108 "auth": { 00:16:34.108 "state": "completed", 00:16:34.108 "digest": "sha384", 00:16:34.108 "dhgroup": "null" 00:16:34.108 } 00:16:34.108 } 00:16:34.108 ]' 00:16:34.108 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.108 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.108 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.108 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:34.108 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.108 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.108 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.108 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.365 15:53:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:16:35.297 15:53:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.297 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:35.297 15:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.297 15:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.297 15:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.297 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.297 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.297 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.297 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:35.554 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:16:35.554 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.554 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:35.554 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:35.554 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:35.554 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.554 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.554 15:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.554 15:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.554 15:53:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.554 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.554 15:53:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.118 00:16:36.118 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.118 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.118 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.375 15:53:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.375 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.375 15:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.375 15:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.375 15:53:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.375 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.375 { 00:16:36.375 "cntlid": 57, 00:16:36.375 "qid": 0, 00:16:36.375 "state": "enabled", 00:16:36.375 "thread": "nvmf_tgt_poll_group_000", 00:16:36.375 "listen_address": { 00:16:36.375 "trtype": "TCP", 00:16:36.375 "adrfam": "IPv4", 00:16:36.375 "traddr": "10.0.0.2", 00:16:36.375 "trsvcid": "4420" 00:16:36.375 }, 00:16:36.375 "peer_address": { 00:16:36.375 "trtype": "TCP", 00:16:36.375 "adrfam": "IPv4", 00:16:36.375 "traddr": "10.0.0.1", 00:16:36.375 "trsvcid": "33678" 00:16:36.375 }, 00:16:36.375 "auth": { 00:16:36.375 "state": "completed", 00:16:36.375 "digest": "sha384", 00:16:36.375 "dhgroup": "ffdhe2048" 00:16:36.375 } 00:16:36.375 } 00:16:36.375 ]' 00:16:36.375 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.375 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:36.375 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.375 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.375 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.375 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.375 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.375 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.632 15:53:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:16:37.564 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.564 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:37.564 15:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.564 15:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.564 15:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.564 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.564 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.564 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:37.822 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:16:37.822 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.822 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:37.822 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:37.822 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:37.822 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.822 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.822 15:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.822 15:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.822 15:53:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.822 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.822 15:53:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.079 00:16:38.079 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.079 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.079 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.337 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.337 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.337 15:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.337 15:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.337 15:53:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.337 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.337 { 00:16:38.337 "cntlid": 59, 00:16:38.337 "qid": 0, 00:16:38.337 "state": "enabled", 00:16:38.337 "thread": "nvmf_tgt_poll_group_000", 00:16:38.337 "listen_address": { 00:16:38.337 "trtype": "TCP", 00:16:38.337 "adrfam": "IPv4", 00:16:38.337 "traddr": "10.0.0.2", 00:16:38.337 "trsvcid": "4420" 00:16:38.337 }, 00:16:38.337 "peer_address": { 00:16:38.337 "trtype": "TCP", 00:16:38.337 "adrfam": "IPv4", 00:16:38.337 
"traddr": "10.0.0.1", 00:16:38.337 "trsvcid": "33696" 00:16:38.337 }, 00:16:38.337 "auth": { 00:16:38.337 "state": "completed", 00:16:38.337 "digest": "sha384", 00:16:38.337 "dhgroup": "ffdhe2048" 00:16:38.337 } 00:16:38.337 } 00:16:38.337 ]' 00:16:38.337 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.337 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.337 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.594 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.594 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.594 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.594 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.594 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.851 15:53:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:16:39.783 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.783 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:39.783 15:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.783 15:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.783 15:53:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.783 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.783 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:39.783 15:53:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:39.783 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:16:39.783 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.783 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:39.783 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:39.783 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:39.783 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.783 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.783 15:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.783 15:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.783 15:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.783 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.783 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.385 00:16:40.385 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.385 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.385 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.385 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.385 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.385 15:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.385 15:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.385 15:53:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.385 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.385 { 00:16:40.385 "cntlid": 61, 00:16:40.385 "qid": 0, 00:16:40.385 "state": "enabled", 00:16:40.385 "thread": "nvmf_tgt_poll_group_000", 00:16:40.385 "listen_address": { 00:16:40.386 "trtype": "TCP", 00:16:40.386 "adrfam": "IPv4", 00:16:40.386 "traddr": "10.0.0.2", 00:16:40.386 "trsvcid": "4420" 00:16:40.386 }, 00:16:40.386 "peer_address": { 00:16:40.386 "trtype": "TCP", 00:16:40.386 "adrfam": "IPv4", 00:16:40.386 "traddr": "10.0.0.1", 00:16:40.386 "trsvcid": "33728" 00:16:40.386 }, 00:16:40.386 "auth": { 00:16:40.386 "state": "completed", 00:16:40.386 "digest": "sha384", 00:16:40.386 "dhgroup": "ffdhe2048" 00:16:40.386 } 00:16:40.386 } 00:16:40.386 ]' 00:16:40.386 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.644 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:40.644 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.644 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:40.644 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.644 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.644 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.644 15:53:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.901 15:53:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:16:41.833 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.833 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:41.833 15:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.833 15:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.833 15:53:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.833 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.833 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:41.833 15:53:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:41.833 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:16:41.833 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.833 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:41.833 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:41.833 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:41.833 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.833 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:41.833 15:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.833 15:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.833 15:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.833 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:41.833 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:42.400 00:16:42.400 15:53:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.400 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.400 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.400 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.400 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.400 15:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.400 15:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.400 15:53:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.400 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.400 { 00:16:42.400 "cntlid": 63, 00:16:42.400 "qid": 0, 00:16:42.400 "state": "enabled", 00:16:42.400 "thread": "nvmf_tgt_poll_group_000", 00:16:42.400 "listen_address": { 00:16:42.400 "trtype": "TCP", 00:16:42.400 "adrfam": "IPv4", 00:16:42.400 "traddr": "10.0.0.2", 00:16:42.400 "trsvcid": "4420" 00:16:42.400 }, 00:16:42.400 "peer_address": { 00:16:42.400 "trtype": "TCP", 00:16:42.400 "adrfam": "IPv4", 00:16:42.400 "traddr": "10.0.0.1", 00:16:42.400 "trsvcid": "33764" 00:16:42.400 }, 00:16:42.400 "auth": { 00:16:42.400 "state": "completed", 00:16:42.400 "digest": "sha384", 00:16:42.400 "dhgroup": "ffdhe2048" 00:16:42.400 } 00:16:42.400 } 00:16:42.400 ]' 00:16:42.400 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.658 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.658 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.658 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:42.658 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.658 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.658 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.658 15:53:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.920 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:16:43.851 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.851 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:43.851 15:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.851 15:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
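
[The trace above and below repeats the same connect/authenticate cycle for every digest, DH group and key index driven by the for-loops at target/auth.sh@91-93. A condensed sketch of one iteration follows; it is not the actual auth.sh, and $HOSTNQN, $HOSTID, $key, $secret and $ctrl_secret are placeholders for the values that appear inline in the log entries.

    # Condensed sketch of one connect_authenticate iteration as traced above.
    # hostrpc in the log expands to this rpc.py invocation against the host
    # bdev_nvme app; rpc_cmd is the analogous wrapper for the target side.
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"

    # 1. Restrict the SPDK host to one digest / DH-group combination.
    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # 2. Allow the host NQN on the subsystem with the key pair under test.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key "$key" --dhchap-ctrlr-key "c$key"

    # 3. Attach a controller through the SPDK initiator and check that the
    #    qpair reports the expected digest, dhgroup and "completed" auth state.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key "$key" --dhchap-ctrlr-key "c$key"
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
    $rpc bdev_nvme_detach_controller nvme0

    # 4. Repeat the handshake with the kernel initiator, then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" --dhchap-secret "$secret" \
        --dhchap-ctrl-secret "$ctrl_secret"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"

The entries that follow are this cycle running with --dhchap-dhgroups ffdhe3072 and then ffdhe4096 for each key index.]
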
00:16:43.851 15:53:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.851 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:43.851 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.851 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:43.851 15:53:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:44.107 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:16:44.107 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.107 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:44.107 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:44.107 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:44.107 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.107 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.108 15:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.108 15:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.108 15:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.108 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.108 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.365 00:16:44.365 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.365 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.365 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.654 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.654 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.654 15:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.654 15:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.654 15:53:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.654 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.654 { 
00:16:44.654 "cntlid": 65, 00:16:44.654 "qid": 0, 00:16:44.654 "state": "enabled", 00:16:44.654 "thread": "nvmf_tgt_poll_group_000", 00:16:44.654 "listen_address": { 00:16:44.654 "trtype": "TCP", 00:16:44.654 "adrfam": "IPv4", 00:16:44.654 "traddr": "10.0.0.2", 00:16:44.654 "trsvcid": "4420" 00:16:44.654 }, 00:16:44.654 "peer_address": { 00:16:44.654 "trtype": "TCP", 00:16:44.654 "adrfam": "IPv4", 00:16:44.654 "traddr": "10.0.0.1", 00:16:44.654 "trsvcid": "37626" 00:16:44.654 }, 00:16:44.654 "auth": { 00:16:44.654 "state": "completed", 00:16:44.654 "digest": "sha384", 00:16:44.654 "dhgroup": "ffdhe3072" 00:16:44.654 } 00:16:44.654 } 00:16:44.654 ]' 00:16:44.654 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.654 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.654 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.654 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.654 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.654 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.654 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.654 15:53:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.911 15:53:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:16:45.843 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.843 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:45.843 15:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.843 15:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.843 15:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.843 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.843 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:45.843 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:46.100 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:16:46.100 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.100 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:16:46.100 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:46.100 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:46.100 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.100 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.100 15:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.100 15:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.100 15:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.100 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.100 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.357 00:16:46.357 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.357 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.357 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.614 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.614 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.614 15:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.614 15:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.614 15:53:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.614 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.614 { 00:16:46.614 "cntlid": 67, 00:16:46.614 "qid": 0, 00:16:46.614 "state": "enabled", 00:16:46.614 "thread": "nvmf_tgt_poll_group_000", 00:16:46.614 "listen_address": { 00:16:46.614 "trtype": "TCP", 00:16:46.614 "adrfam": "IPv4", 00:16:46.614 "traddr": "10.0.0.2", 00:16:46.614 "trsvcid": "4420" 00:16:46.614 }, 00:16:46.614 "peer_address": { 00:16:46.614 "trtype": "TCP", 00:16:46.614 "adrfam": "IPv4", 00:16:46.614 "traddr": "10.0.0.1", 00:16:46.614 "trsvcid": "37654" 00:16:46.614 }, 00:16:46.614 "auth": { 00:16:46.614 "state": "completed", 00:16:46.614 "digest": "sha384", 00:16:46.614 "dhgroup": "ffdhe3072" 00:16:46.614 } 00:16:46.614 } 00:16:46.614 ]' 00:16:46.614 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.871 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.871 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.871 15:53:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.871 15:53:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.871 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.871 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.871 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.128 15:53:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:16:48.058 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.058 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:48.058 15:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.058 15:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.058 15:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.058 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.058 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:48.058 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:48.316 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:16:48.316 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.316 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:48.316 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:48.316 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:48.316 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.316 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.316 15:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.316 15:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.316 15:53:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.316 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.316 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.574 00:16:48.574 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.574 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.574 15:53:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.831 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.831 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.831 15:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.831 15:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.831 15:53:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.831 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.831 { 00:16:48.831 "cntlid": 69, 00:16:48.831 "qid": 0, 00:16:48.831 "state": "enabled", 00:16:48.831 "thread": "nvmf_tgt_poll_group_000", 00:16:48.831 "listen_address": { 00:16:48.831 "trtype": "TCP", 00:16:48.831 "adrfam": "IPv4", 00:16:48.831 "traddr": "10.0.0.2", 00:16:48.831 "trsvcid": "4420" 00:16:48.831 }, 00:16:48.831 "peer_address": { 00:16:48.831 "trtype": "TCP", 00:16:48.831 "adrfam": "IPv4", 00:16:48.831 "traddr": "10.0.0.1", 00:16:48.831 "trsvcid": "37696" 00:16:48.831 }, 00:16:48.831 "auth": { 00:16:48.831 "state": "completed", 00:16:48.831 "digest": "sha384", 00:16:48.831 "dhgroup": "ffdhe3072" 00:16:48.831 } 00:16:48.831 } 00:16:48.831 ]' 00:16:48.831 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.831 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.831 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.831 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:48.831 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.088 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.088 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.088 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.345 15:53:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret 
DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:16:50.278 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.278 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:50.278 15:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.278 15:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.278 15:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.278 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.278 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:50.278 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:50.535 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:16:50.535 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.535 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:50.535 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:50.535 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:50.535 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.535 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:50.535 15:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.535 15:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.535 15:53:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.535 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.535 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:50.793 00:16:50.793 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.793 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.793 15:53:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.050 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.050 15:53:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.050 15:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.050 15:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.050 15:53:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.050 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.050 { 00:16:51.050 "cntlid": 71, 00:16:51.050 "qid": 0, 00:16:51.050 "state": "enabled", 00:16:51.050 "thread": "nvmf_tgt_poll_group_000", 00:16:51.050 "listen_address": { 00:16:51.050 "trtype": "TCP", 00:16:51.050 "adrfam": "IPv4", 00:16:51.050 "traddr": "10.0.0.2", 00:16:51.050 "trsvcid": "4420" 00:16:51.050 }, 00:16:51.050 "peer_address": { 00:16:51.050 "trtype": "TCP", 00:16:51.050 "adrfam": "IPv4", 00:16:51.050 "traddr": "10.0.0.1", 00:16:51.050 "trsvcid": "37728" 00:16:51.050 }, 00:16:51.050 "auth": { 00:16:51.050 "state": "completed", 00:16:51.050 "digest": "sha384", 00:16:51.050 "dhgroup": "ffdhe3072" 00:16:51.050 } 00:16:51.050 } 00:16:51.050 ]' 00:16:51.050 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.050 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.050 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.050 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.050 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.050 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.050 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.050 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.307 15:53:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:16:52.238 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.238 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:52.238 15:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.238 15:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.238 15:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.238 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.238 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.238 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:52.238 15:53:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:52.495 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:16:52.495 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.495 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:52.495 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:52.495 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:52.495 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.495 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.495 15:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.495 15:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.752 15:53:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.752 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.752 15:53:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:53.009 00:16:53.009 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.009 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.009 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.266 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.266 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.266 15:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.266 15:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.266 15:53:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.266 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.266 { 00:16:53.266 "cntlid": 73, 00:16:53.266 "qid": 0, 00:16:53.266 "state": "enabled", 00:16:53.266 "thread": "nvmf_tgt_poll_group_000", 00:16:53.266 "listen_address": { 00:16:53.266 "trtype": "TCP", 00:16:53.266 "adrfam": "IPv4", 00:16:53.266 "traddr": "10.0.0.2", 00:16:53.266 "trsvcid": "4420" 00:16:53.266 }, 00:16:53.266 "peer_address": { 00:16:53.266 "trtype": "TCP", 00:16:53.266 "adrfam": "IPv4", 00:16:53.266 "traddr": "10.0.0.1", 00:16:53.266 "trsvcid": "37762" 00:16:53.266 }, 00:16:53.266 "auth": { 00:16:53.266 
"state": "completed", 00:16:53.266 "digest": "sha384", 00:16:53.266 "dhgroup": "ffdhe4096" 00:16:53.266 } 00:16:53.266 } 00:16:53.266 ]' 00:16:53.266 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.266 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.266 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.266 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.266 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.266 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.266 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.266 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.524 15:53:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:16:54.455 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.455 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:54.455 15:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.455 15:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.455 15:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.455 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.455 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.455 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:54.712 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:16:54.712 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.712 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:54.712 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:54.712 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:54.712 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.712 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.712 15:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.712 15:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.712 15:53:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.712 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.712 15:53:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.276 00:16:55.276 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.276 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.276 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.276 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.276 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.276 15:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.276 15:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.276 15:53:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.276 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:55.276 { 00:16:55.276 "cntlid": 75, 00:16:55.276 "qid": 0, 00:16:55.276 "state": "enabled", 00:16:55.276 "thread": "nvmf_tgt_poll_group_000", 00:16:55.276 "listen_address": { 00:16:55.276 "trtype": "TCP", 00:16:55.276 "adrfam": "IPv4", 00:16:55.276 "traddr": "10.0.0.2", 00:16:55.276 "trsvcid": "4420" 00:16:55.276 }, 00:16:55.276 "peer_address": { 00:16:55.276 "trtype": "TCP", 00:16:55.276 "adrfam": "IPv4", 00:16:55.276 "traddr": "10.0.0.1", 00:16:55.276 "trsvcid": "55388" 00:16:55.276 }, 00:16:55.276 "auth": { 00:16:55.276 "state": "completed", 00:16:55.276 "digest": "sha384", 00:16:55.276 "dhgroup": "ffdhe4096" 00:16:55.276 } 00:16:55.276 } 00:16:55.276 ]' 00:16:55.276 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:55.533 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:55.533 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:55.533 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:55.533 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:55.533 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.533 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.533 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.790 15:53:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:16:56.721 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.721 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:56.721 15:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.721 15:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.721 15:53:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.721 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.721 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:56.721 15:53:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:56.979 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:56.979 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.979 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:56.979 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:56.979 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:56.979 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.979 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.979 15:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.979 15:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.979 15:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.979 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.979 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:57.236 00:16:57.236 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.236 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.236 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.493 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.494 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.494 15:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.494 15:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.494 15:53:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.494 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.494 { 00:16:57.494 "cntlid": 77, 00:16:57.494 "qid": 0, 00:16:57.494 "state": "enabled", 00:16:57.494 "thread": "nvmf_tgt_poll_group_000", 00:16:57.494 "listen_address": { 00:16:57.494 "trtype": "TCP", 00:16:57.494 "adrfam": "IPv4", 00:16:57.494 "traddr": "10.0.0.2", 00:16:57.494 "trsvcid": "4420" 00:16:57.494 }, 00:16:57.494 "peer_address": { 00:16:57.494 "trtype": "TCP", 00:16:57.494 "adrfam": "IPv4", 00:16:57.494 "traddr": "10.0.0.1", 00:16:57.494 "trsvcid": "55412" 00:16:57.494 }, 00:16:57.494 "auth": { 00:16:57.494 "state": "completed", 00:16:57.494 "digest": "sha384", 00:16:57.494 "dhgroup": "ffdhe4096" 00:16:57.494 } 00:16:57.494 } 00:16:57.494 ]' 00:16:57.494 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.494 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.494 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.494 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.494 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.751 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.751 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.751 15:53:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.008 15:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:16:58.937 15:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.938 15:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:58.938 15:53:55 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.938 15:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.938 15:53:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.938 15:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.938 15:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:58.938 15:53:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:59.195 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:59.195 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.195 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:59.195 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:59.195 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:59.195 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.195 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:59.195 15:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.195 15:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.195 15:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.195 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.195 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.452 00:16:59.452 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.452 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.452 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.709 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.709 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.709 15:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.709 15:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.709 15:53:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.709 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.709 { 00:16:59.709 "cntlid": 79, 00:16:59.709 "qid": 
0, 00:16:59.709 "state": "enabled", 00:16:59.709 "thread": "nvmf_tgt_poll_group_000", 00:16:59.709 "listen_address": { 00:16:59.709 "trtype": "TCP", 00:16:59.709 "adrfam": "IPv4", 00:16:59.709 "traddr": "10.0.0.2", 00:16:59.709 "trsvcid": "4420" 00:16:59.709 }, 00:16:59.709 "peer_address": { 00:16:59.709 "trtype": "TCP", 00:16:59.709 "adrfam": "IPv4", 00:16:59.709 "traddr": "10.0.0.1", 00:16:59.709 "trsvcid": "55444" 00:16:59.709 }, 00:16:59.709 "auth": { 00:16:59.709 "state": "completed", 00:16:59.709 "digest": "sha384", 00:16:59.709 "dhgroup": "ffdhe4096" 00:16:59.709 } 00:16:59.709 } 00:16:59.709 ]' 00:16:59.709 15:53:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.709 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.966 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.967 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.967 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.967 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.967 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.967 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.224 15:53:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:17:01.156 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.156 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.156 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:01.156 15:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.156 15:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.156 15:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.156 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.156 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.156 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:01.156 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:01.414 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:01.414 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.414 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:01.414 15:53:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:01.414 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:01.414 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.414 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.414 15:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.414 15:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.414 15:53:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.414 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.414 15:53:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.977 00:17:01.977 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.977 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.977 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.234 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.234 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.234 15:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.234 15:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.234 15:53:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.234 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.234 { 00:17:02.234 "cntlid": 81, 00:17:02.234 "qid": 0, 00:17:02.234 "state": "enabled", 00:17:02.234 "thread": "nvmf_tgt_poll_group_000", 00:17:02.234 "listen_address": { 00:17:02.234 "trtype": "TCP", 00:17:02.234 "adrfam": "IPv4", 00:17:02.234 "traddr": "10.0.0.2", 00:17:02.234 "trsvcid": "4420" 00:17:02.234 }, 00:17:02.234 "peer_address": { 00:17:02.234 "trtype": "TCP", 00:17:02.234 "adrfam": "IPv4", 00:17:02.234 "traddr": "10.0.0.1", 00:17:02.234 "trsvcid": "55474" 00:17:02.234 }, 00:17:02.234 "auth": { 00:17:02.234 "state": "completed", 00:17:02.234 "digest": "sha384", 00:17:02.234 "dhgroup": "ffdhe6144" 00:17:02.234 } 00:17:02.234 } 00:17:02.234 ]' 00:17:02.234 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.234 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.234 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.490 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:02.490 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.490 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.490 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.490 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.748 15:53:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:17:03.679 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.679 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:03.679 15:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.679 15:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.679 15:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.679 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.679 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.679 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:03.937 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:03.937 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.937 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:03.937 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:03.937 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:03.937 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.937 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.937 15:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.937 15:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.937 15:54:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.937 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.937 15:54:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.502 00:17:04.502 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.502 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.502 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.502 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.502 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.502 15:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.502 15:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.502 15:54:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.502 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.502 { 00:17:04.502 "cntlid": 83, 00:17:04.502 "qid": 0, 00:17:04.502 "state": "enabled", 00:17:04.502 "thread": "nvmf_tgt_poll_group_000", 00:17:04.502 "listen_address": { 00:17:04.502 "trtype": "TCP", 00:17:04.502 "adrfam": "IPv4", 00:17:04.502 "traddr": "10.0.0.2", 00:17:04.502 "trsvcid": "4420" 00:17:04.502 }, 00:17:04.502 "peer_address": { 00:17:04.502 "trtype": "TCP", 00:17:04.502 "adrfam": "IPv4", 00:17:04.502 "traddr": "10.0.0.1", 00:17:04.502 "trsvcid": "57144" 00:17:04.502 }, 00:17:04.502 "auth": { 00:17:04.502 "state": "completed", 00:17:04.502 "digest": "sha384", 00:17:04.502 "dhgroup": "ffdhe6144" 00:17:04.502 } 00:17:04.502 } 00:17:04.502 ]' 00:17:04.502 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.790 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.790 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.790 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:04.790 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.790 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.790 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.790 15:54:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.072 15:54:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret 
DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:17:06.002 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.002 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:06.002 15:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.002 15:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.002 15:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.002 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.002 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:06.002 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:06.260 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:06.260 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.260 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:06.260 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:06.260 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:06.260 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.260 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.260 15:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.260 15:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.260 15:54:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.260 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.260 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.825 00:17:06.825 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.825 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.825 15:54:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.825 15:54:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.825 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.825 15:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.825 15:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.825 15:54:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.825 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.825 { 00:17:06.825 "cntlid": 85, 00:17:06.825 "qid": 0, 00:17:06.825 "state": "enabled", 00:17:06.825 "thread": "nvmf_tgt_poll_group_000", 00:17:06.825 "listen_address": { 00:17:06.825 "trtype": "TCP", 00:17:06.825 "adrfam": "IPv4", 00:17:06.825 "traddr": "10.0.0.2", 00:17:06.825 "trsvcid": "4420" 00:17:06.825 }, 00:17:06.825 "peer_address": { 00:17:06.825 "trtype": "TCP", 00:17:06.825 "adrfam": "IPv4", 00:17:06.825 "traddr": "10.0.0.1", 00:17:06.825 "trsvcid": "57174" 00:17:06.825 }, 00:17:06.825 "auth": { 00:17:06.825 "state": "completed", 00:17:06.825 "digest": "sha384", 00:17:06.825 "dhgroup": "ffdhe6144" 00:17:06.825 } 00:17:06.825 } 00:17:06.825 ]' 00:17:06.825 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.081 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.081 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.081 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:07.081 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.081 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.081 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.081 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.337 15:54:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:17:08.265 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.265 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:08.265 15:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.265 15:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.265 15:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.265 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.265 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
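The trace above has just switched the host to the sha384 digest with the ffdhe6144 DH group for the next key index; every key index goes through the same connect_authenticate cycle. The lines below are a minimal host-side sketch of that cycle, not the test script itself: they assume the /var/tmp/host.sock RPC socket, the subsystem and host NQNs shown in the log, and key names (key2/ckey2) registered earlier in the run; substitute whichever key index the loop is on (key3 has no controller key, so its add_host call drops --dhchap-ctrlr-key), and read spdk/scripts/rpc.py as shorthand for the full workspace path used in the trace.

    # restrict the SPDK host to the digest/dhgroup pair under test
    spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # allow the host on the target subsystem with the key material for this iteration
    # (in the log this call goes through the suite's rpc_cmd wrapper on the target RPC socket)
    spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # attach a controller; the fabric connect must complete DH-HMAC-CHAP with the same keys
    spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
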
00:17:08.265 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:08.521 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:08.521 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.521 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.521 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:08.521 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:08.521 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.521 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:08.521 15:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.521 15:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.521 15:54:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.521 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.521 15:54:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.096 00:17:09.096 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.096 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.096 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.353 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.353 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.353 15:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.353 15:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.353 15:54:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.353 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.353 { 00:17:09.353 "cntlid": 87, 00:17:09.353 "qid": 0, 00:17:09.353 "state": "enabled", 00:17:09.353 "thread": "nvmf_tgt_poll_group_000", 00:17:09.353 "listen_address": { 00:17:09.353 "trtype": "TCP", 00:17:09.353 "adrfam": "IPv4", 00:17:09.354 "traddr": "10.0.0.2", 00:17:09.354 "trsvcid": "4420" 00:17:09.354 }, 00:17:09.354 "peer_address": { 00:17:09.354 "trtype": "TCP", 00:17:09.354 "adrfam": "IPv4", 00:17:09.354 "traddr": "10.0.0.1", 00:17:09.354 "trsvcid": "57202" 00:17:09.354 }, 00:17:09.354 "auth": { 00:17:09.354 "state": "completed", 
00:17:09.354 "digest": "sha384", 00:17:09.354 "dhgroup": "ffdhe6144" 00:17:09.354 } 00:17:09.354 } 00:17:09.354 ]' 00:17:09.354 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.354 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.354 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.354 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.354 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.354 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.354 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.354 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.918 15:54:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:17:10.483 15:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.741 15:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:10.741 15:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.741 15:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.741 15:54:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.741 15:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.741 15:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:10.741 15:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:10.741 15:54:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:10.998 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:10.998 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.998 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:10.998 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:10.998 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:10.998 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.998 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:10.998 15:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.998 15:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.998 15:54:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.998 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.998 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.562 00:17:11.820 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.820 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.820 15:54:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.078 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.078 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.078 15:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.078 15:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.078 15:54:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.078 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.078 { 00:17:12.078 "cntlid": 89, 00:17:12.078 "qid": 0, 00:17:12.078 "state": "enabled", 00:17:12.078 "thread": "nvmf_tgt_poll_group_000", 00:17:12.078 "listen_address": { 00:17:12.078 "trtype": "TCP", 00:17:12.078 "adrfam": "IPv4", 00:17:12.078 "traddr": "10.0.0.2", 00:17:12.078 "trsvcid": "4420" 00:17:12.078 }, 00:17:12.078 "peer_address": { 00:17:12.078 "trtype": "TCP", 00:17:12.078 "adrfam": "IPv4", 00:17:12.078 "traddr": "10.0.0.1", 00:17:12.078 "trsvcid": "57236" 00:17:12.078 }, 00:17:12.078 "auth": { 00:17:12.078 "state": "completed", 00:17:12.078 "digest": "sha384", 00:17:12.078 "dhgroup": "ffdhe8192" 00:17:12.078 } 00:17:12.078 } 00:17:12.078 ]' 00:17:12.078 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.078 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.078 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.078 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:12.078 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.078 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.078 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.078 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.335 15:54:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:17:13.266 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.266 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:13.266 15:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.266 15:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.266 15:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.266 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.266 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:13.266 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:13.523 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:13.523 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.523 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.523 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:13.523 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:13.523 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.523 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.523 15:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.523 15:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.523 15:54:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.523 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.523 15:54:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
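The iteration traced above is the connect_authenticate step that the rest of this log repeats for each digest, dhgroup and key index. A minimal stand-alone sketch of that step in shell, using only the RPCs and flags visible in the trace; the shell variables are shorthand introduced here, and it assumes keys named key1/ckey1 were already loaded into the target and host keyrings earlier in the test run (not shown in this excerpt):

# target-side rpc.py (default socket): allow the host NQN to authenticate with key1/ckey1
NQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
scripts/rpc.py nvmf_subsystem_add_host "$NQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# host-side rpc.py (-s /var/tmp/host.sock): pin the digest/dhgroup, then attach with the same key pair
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$NQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

The controller-name and qpair checks that follow in the log are what confirm the negotiation actually completed for this pass.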
00:17:14.457 00:17:14.457 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.457 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.457 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.457 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.457 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.457 15:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.457 15:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.457 15:54:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.457 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.457 { 00:17:14.457 "cntlid": 91, 00:17:14.457 "qid": 0, 00:17:14.457 "state": "enabled", 00:17:14.457 "thread": "nvmf_tgt_poll_group_000", 00:17:14.457 "listen_address": { 00:17:14.457 "trtype": "TCP", 00:17:14.457 "adrfam": "IPv4", 00:17:14.457 "traddr": "10.0.0.2", 00:17:14.457 "trsvcid": "4420" 00:17:14.457 }, 00:17:14.457 "peer_address": { 00:17:14.457 "trtype": "TCP", 00:17:14.457 "adrfam": "IPv4", 00:17:14.457 "traddr": "10.0.0.1", 00:17:14.457 "trsvcid": "42724" 00:17:14.457 }, 00:17:14.457 "auth": { 00:17:14.457 "state": "completed", 00:17:14.457 "digest": "sha384", 00:17:14.457 "dhgroup": "ffdhe8192" 00:17:14.457 } 00:17:14.457 } 00:17:14.457 ]' 00:17:14.457 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.714 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.714 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.714 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.714 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.714 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.714 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.714 15:54:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.972 15:54:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:17:15.904 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.904 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:15.904 15:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:17:15.904 15:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.904 15:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.904 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.904 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:15.904 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:16.162 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:16.162 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.162 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:16.162 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:16.162 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:16.162 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.162 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.162 15:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.162 15:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.162 15:54:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.162 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:16.162 15:54:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:17.094 00:17:17.094 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.094 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.094 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.094 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.094 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.094 15:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.094 15:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.094 15:54:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.094 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.094 { 
00:17:17.094 "cntlid": 93, 00:17:17.094 "qid": 0, 00:17:17.094 "state": "enabled", 00:17:17.095 "thread": "nvmf_tgt_poll_group_000", 00:17:17.095 "listen_address": { 00:17:17.095 "trtype": "TCP", 00:17:17.095 "adrfam": "IPv4", 00:17:17.095 "traddr": "10.0.0.2", 00:17:17.095 "trsvcid": "4420" 00:17:17.095 }, 00:17:17.095 "peer_address": { 00:17:17.095 "trtype": "TCP", 00:17:17.095 "adrfam": "IPv4", 00:17:17.095 "traddr": "10.0.0.1", 00:17:17.095 "trsvcid": "42754" 00:17:17.095 }, 00:17:17.095 "auth": { 00:17:17.095 "state": "completed", 00:17:17.095 "digest": "sha384", 00:17:17.095 "dhgroup": "ffdhe8192" 00:17:17.095 } 00:17:17.095 } 00:17:17.095 ]' 00:17:17.095 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.351 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.351 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.351 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.351 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.351 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.351 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.351 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.609 15:54:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:17:18.541 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.541 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:18.541 15:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.541 15:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.541 15:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.541 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.541 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:18.541 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:18.799 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:18.799 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.799 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:18.799 15:54:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:18.799 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:18.799 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.799 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:18.799 15:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.799 15:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.799 15:54:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.799 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.799 15:54:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.733 00:17:19.733 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.733 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.733 15:54:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.733 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.733 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.733 15:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.733 15:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.990 15:54:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.990 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.990 { 00:17:19.990 "cntlid": 95, 00:17:19.990 "qid": 0, 00:17:19.990 "state": "enabled", 00:17:19.990 "thread": "nvmf_tgt_poll_group_000", 00:17:19.990 "listen_address": { 00:17:19.990 "trtype": "TCP", 00:17:19.990 "adrfam": "IPv4", 00:17:19.990 "traddr": "10.0.0.2", 00:17:19.990 "trsvcid": "4420" 00:17:19.990 }, 00:17:19.990 "peer_address": { 00:17:19.990 "trtype": "TCP", 00:17:19.990 "adrfam": "IPv4", 00:17:19.990 "traddr": "10.0.0.1", 00:17:19.990 "trsvcid": "42774" 00:17:19.990 }, 00:17:19.990 "auth": { 00:17:19.990 "state": "completed", 00:17:19.990 "digest": "sha384", 00:17:19.990 "dhgroup": "ffdhe8192" 00:17:19.990 } 00:17:19.990 } 00:17:19.990 ]' 00:17:19.990 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.990 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.990 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.990 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.990 15:54:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.990 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.990 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.990 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.247 15:54:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:17:21.180 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.180 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:21.180 15:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.180 15:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.180 15:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.180 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:21.180 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.180 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.180 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:21.180 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:21.438 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:21.438 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.438 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:21.438 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:21.438 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:21.438 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.438 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.438 15:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.438 15:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.438 15:54:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.438 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.438 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.695 00:17:21.695 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.695 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.695 15:54:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.951 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.951 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.951 15:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.951 15:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.951 15:54:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.951 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.951 { 00:17:21.951 "cntlid": 97, 00:17:21.951 "qid": 0, 00:17:21.951 "state": "enabled", 00:17:21.951 "thread": "nvmf_tgt_poll_group_000", 00:17:21.951 "listen_address": { 00:17:21.951 "trtype": "TCP", 00:17:21.951 "adrfam": "IPv4", 00:17:21.951 "traddr": "10.0.0.2", 00:17:21.951 "trsvcid": "4420" 00:17:21.951 }, 00:17:21.951 "peer_address": { 00:17:21.951 "trtype": "TCP", 00:17:21.951 "adrfam": "IPv4", 00:17:21.951 "traddr": "10.0.0.1", 00:17:21.951 "trsvcid": "42786" 00:17:21.951 }, 00:17:21.951 "auth": { 00:17:21.951 "state": "completed", 00:17:21.951 "digest": "sha512", 00:17:21.951 "dhgroup": "null" 00:17:21.951 } 00:17:21.951 } 00:17:21.951 ]' 00:17:21.951 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.952 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.952 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.952 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:21.952 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.952 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.952 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.952 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.209 15:54:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret 
DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:17:23.141 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.141 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:23.141 15:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.141 15:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.141 15:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.141 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.141 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:23.141 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:23.398 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:23.398 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.398 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:23.398 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:23.398 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:23.398 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.398 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.398 15:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.398 15:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.398 15:54:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.398 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.398 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.656 00:17:23.656 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.656 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.656 15:54:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.913 15:54:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.913 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.913 15:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.913 15:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.913 15:54:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.913 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.913 { 00:17:23.913 "cntlid": 99, 00:17:23.913 "qid": 0, 00:17:23.913 "state": "enabled", 00:17:23.913 "thread": "nvmf_tgt_poll_group_000", 00:17:23.913 "listen_address": { 00:17:23.913 "trtype": "TCP", 00:17:23.913 "adrfam": "IPv4", 00:17:23.913 "traddr": "10.0.0.2", 00:17:23.913 "trsvcid": "4420" 00:17:23.913 }, 00:17:23.913 "peer_address": { 00:17:23.913 "trtype": "TCP", 00:17:23.913 "adrfam": "IPv4", 00:17:23.913 "traddr": "10.0.0.1", 00:17:23.913 "trsvcid": "37624" 00:17:23.913 }, 00:17:23.913 "auth": { 00:17:23.913 "state": "completed", 00:17:23.913 "digest": "sha512", 00:17:23.913 "dhgroup": "null" 00:17:23.913 } 00:17:23.913 } 00:17:23.913 ]' 00:17:23.913 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.913 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.914 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.171 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:24.171 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.171 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.171 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.171 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.428 15:54:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:25.361 15:54:22 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.361 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:25.925 00:17:25.925 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.925 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.925 15:54:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.925 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.925 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.925 15:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.925 15:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.925 15:54:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.925 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.925 { 00:17:25.925 "cntlid": 101, 00:17:25.925 "qid": 0, 00:17:25.925 "state": "enabled", 00:17:25.925 "thread": "nvmf_tgt_poll_group_000", 00:17:25.925 "listen_address": { 00:17:25.925 "trtype": "TCP", 00:17:25.925 "adrfam": "IPv4", 00:17:25.925 "traddr": "10.0.0.2", 00:17:25.925 "trsvcid": "4420" 00:17:25.925 }, 00:17:25.925 "peer_address": { 00:17:25.925 "trtype": "TCP", 00:17:25.925 "adrfam": "IPv4", 00:17:25.925 "traddr": "10.0.0.1", 00:17:25.925 "trsvcid": "37646" 00:17:25.925 }, 00:17:25.925 "auth": 
{ 00:17:25.925 "state": "completed", 00:17:25.925 "digest": "sha512", 00:17:25.925 "dhgroup": "null" 00:17:25.925 } 00:17:25.925 } 00:17:25.925 ]' 00:17:25.925 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.182 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.182 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.182 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:26.182 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.182 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.182 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.182 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.440 15:54:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.404 15:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.662 15:54:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.662 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.662 15:54:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.919 00:17:27.919 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.919 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.919 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.177 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.177 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.177 15:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.177 15:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.177 15:54:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.177 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.177 { 00:17:28.177 "cntlid": 103, 00:17:28.177 "qid": 0, 00:17:28.177 "state": "enabled", 00:17:28.177 "thread": "nvmf_tgt_poll_group_000", 00:17:28.177 "listen_address": { 00:17:28.177 "trtype": "TCP", 00:17:28.177 "adrfam": "IPv4", 00:17:28.177 "traddr": "10.0.0.2", 00:17:28.177 "trsvcid": "4420" 00:17:28.177 }, 00:17:28.177 "peer_address": { 00:17:28.177 "trtype": "TCP", 00:17:28.177 "adrfam": "IPv4", 00:17:28.177 "traddr": "10.0.0.1", 00:17:28.177 "trsvcid": "37666" 00:17:28.177 }, 00:17:28.177 "auth": { 00:17:28.177 "state": "completed", 00:17:28.177 "digest": "sha512", 00:17:28.177 "dhgroup": "null" 00:17:28.177 } 00:17:28.177 } 00:17:28.177 ]' 00:17:28.177 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.177 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.177 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.177 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:28.177 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.177 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.177 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.177 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.435 15:54:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:17:29.367 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.367 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:29.367 15:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.367 15:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.367 15:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.367 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.367 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.367 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.367 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.624 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:29.624 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.624 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:29.624 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:29.624 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:29.624 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.624 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.624 15:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.624 15:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.624 15:54:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.624 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.624 15:54:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.189 00:17:30.189 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.189 15:54:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.189 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.446 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.446 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.446 15:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.446 15:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.446 15:54:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.446 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.446 { 00:17:30.446 "cntlid": 105, 00:17:30.446 "qid": 0, 00:17:30.446 "state": "enabled", 00:17:30.446 "thread": "nvmf_tgt_poll_group_000", 00:17:30.446 "listen_address": { 00:17:30.446 "trtype": "TCP", 00:17:30.446 "adrfam": "IPv4", 00:17:30.446 "traddr": "10.0.0.2", 00:17:30.446 "trsvcid": "4420" 00:17:30.446 }, 00:17:30.446 "peer_address": { 00:17:30.446 "trtype": "TCP", 00:17:30.446 "adrfam": "IPv4", 00:17:30.446 "traddr": "10.0.0.1", 00:17:30.446 "trsvcid": "37688" 00:17:30.446 }, 00:17:30.446 "auth": { 00:17:30.446 "state": "completed", 00:17:30.446 "digest": "sha512", 00:17:30.446 "dhgroup": "ffdhe2048" 00:17:30.446 } 00:17:30.446 } 00:17:30.446 ]' 00:17:30.446 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.446 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:30.446 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.446 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:30.446 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.446 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.446 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.446 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.704 15:54:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:17:31.634 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.634 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:31.635 15:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.635 15:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
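Each pass is then verified the same way before teardown: the host RPC socket must report the controller as nvme0, the target's qpair listing must show the negotiated digest and dhgroup with auth.state "completed", and an end-to-end nvme-cli connect/disconnect is run with the inline DHHC-1 secrets. A hedged sketch of that verification and teardown, with the jq filters from the trace combined into one call and the long secrets shortened to "..." here:

# host side: the attached controller should be visible as nvme0
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
# target side: inspect the qpair's negotiated auth parameters (expect e.g. sha512 / ffdhe2048 / completed)
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
# detach the SPDK host controller, then repeat the handshake with the kernel initiator
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
    --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02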
00:17:31.635 15:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.635 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.635 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:31.635 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:31.891 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:31.891 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.891 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:31.891 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:31.891 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:31.891 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.891 15:54:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.891 15:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.891 15:54:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.891 15:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.891 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.891 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.148 00:17:32.148 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.148 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.148 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.405 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.405 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.405 15:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.405 15:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.405 15:54:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.405 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.405 { 00:17:32.405 "cntlid": 107, 00:17:32.405 "qid": 0, 00:17:32.405 "state": "enabled", 00:17:32.405 "thread": 
"nvmf_tgt_poll_group_000", 00:17:32.405 "listen_address": { 00:17:32.405 "trtype": "TCP", 00:17:32.405 "adrfam": "IPv4", 00:17:32.405 "traddr": "10.0.0.2", 00:17:32.405 "trsvcid": "4420" 00:17:32.405 }, 00:17:32.405 "peer_address": { 00:17:32.405 "trtype": "TCP", 00:17:32.405 "adrfam": "IPv4", 00:17:32.405 "traddr": "10.0.0.1", 00:17:32.405 "trsvcid": "37716" 00:17:32.405 }, 00:17:32.405 "auth": { 00:17:32.405 "state": "completed", 00:17:32.405 "digest": "sha512", 00:17:32.405 "dhgroup": "ffdhe2048" 00:17:32.405 } 00:17:32.405 } 00:17:32.405 ]' 00:17:32.405 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.405 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:32.405 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.405 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:32.405 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.405 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.405 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.405 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.662 15:54:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:17:33.594 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.594 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:33.594 15:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.594 15:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.594 15:54:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.594 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.594 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.594 15:54:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.852 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:33.852 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.852 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:33.852 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:33.852 15:54:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:33.852 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.852 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.852 15:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.852 15:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.852 15:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.852 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.852 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.110 00:17:34.367 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.367 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.367 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.624 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.624 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.625 15:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.625 15:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.625 15:54:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.625 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.625 { 00:17:34.625 "cntlid": 109, 00:17:34.625 "qid": 0, 00:17:34.625 "state": "enabled", 00:17:34.625 "thread": "nvmf_tgt_poll_group_000", 00:17:34.625 "listen_address": { 00:17:34.625 "trtype": "TCP", 00:17:34.625 "adrfam": "IPv4", 00:17:34.625 "traddr": "10.0.0.2", 00:17:34.625 "trsvcid": "4420" 00:17:34.625 }, 00:17:34.625 "peer_address": { 00:17:34.625 "trtype": "TCP", 00:17:34.625 "adrfam": "IPv4", 00:17:34.625 "traddr": "10.0.0.1", 00:17:34.625 "trsvcid": "48422" 00:17:34.625 }, 00:17:34.625 "auth": { 00:17:34.625 "state": "completed", 00:17:34.625 "digest": "sha512", 00:17:34.625 "dhgroup": "ffdhe2048" 00:17:34.625 } 00:17:34.625 } 00:17:34.625 ]' 00:17:34.625 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.625 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.625 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.625 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.625 15:54:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.625 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.625 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.625 15:54:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.882 15:54:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:17:35.827 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.827 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:35.827 15:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.827 15:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.827 15:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.827 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.827 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:35.827 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:36.086 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:36.086 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.086 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:36.086 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:36.086 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:36.086 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.086 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:36.086 15:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.086 15:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.086 15:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.086 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.086 15:54:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.650 00:17:36.650 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.650 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.650 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.650 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.650 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.650 15:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.650 15:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.650 15:54:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.650 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.650 { 00:17:36.650 "cntlid": 111, 00:17:36.650 "qid": 0, 00:17:36.650 "state": "enabled", 00:17:36.650 "thread": "nvmf_tgt_poll_group_000", 00:17:36.650 "listen_address": { 00:17:36.650 "trtype": "TCP", 00:17:36.650 "adrfam": "IPv4", 00:17:36.650 "traddr": "10.0.0.2", 00:17:36.650 "trsvcid": "4420" 00:17:36.650 }, 00:17:36.650 "peer_address": { 00:17:36.650 "trtype": "TCP", 00:17:36.650 "adrfam": "IPv4", 00:17:36.650 "traddr": "10.0.0.1", 00:17:36.650 "trsvcid": "48462" 00:17:36.650 }, 00:17:36.650 "auth": { 00:17:36.650 "state": "completed", 00:17:36.650 "digest": "sha512", 00:17:36.650 "dhgroup": "ffdhe2048" 00:17:36.650 } 00:17:36.650 } 00:17:36.650 ]' 00:17:36.650 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.907 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.907 15:54:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.907 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:36.907 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.907 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.907 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.908 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.165 15:54:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:17:38.097 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.097 15:54:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:38.097 15:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.097 15:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.097 15:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.097 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.097 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.097 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.098 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.355 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:38.355 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.355 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:38.355 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:38.355 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:38.355 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.355 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.355 15:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.355 15:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.355 15:54:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.355 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.355 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.613 00:17:38.613 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.613 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.613 15:54:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.871 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.871 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.871 15:54:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.871 15:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.871 15:54:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.871 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.871 { 00:17:38.871 "cntlid": 113, 00:17:38.871 "qid": 0, 00:17:38.871 "state": "enabled", 00:17:38.871 "thread": "nvmf_tgt_poll_group_000", 00:17:38.871 "listen_address": { 00:17:38.871 "trtype": "TCP", 00:17:38.871 "adrfam": "IPv4", 00:17:38.871 "traddr": "10.0.0.2", 00:17:38.871 "trsvcid": "4420" 00:17:38.871 }, 00:17:38.871 "peer_address": { 00:17:38.871 "trtype": "TCP", 00:17:38.871 "adrfam": "IPv4", 00:17:38.871 "traddr": "10.0.0.1", 00:17:38.871 "trsvcid": "48488" 00:17:38.871 }, 00:17:38.871 "auth": { 00:17:38.871 "state": "completed", 00:17:38.871 "digest": "sha512", 00:17:38.871 "dhgroup": "ffdhe3072" 00:17:38.871 } 00:17:38.871 } 00:17:38.871 ]' 00:17:38.871 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.871 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.871 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.871 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:38.871 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.871 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.871 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.871 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.129 15:54:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:17:40.061 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.061 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:40.061 15:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.061 15:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.061 15:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.061 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.061 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.061 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.626 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:40.626 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.626 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:40.626 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:40.626 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:40.626 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.626 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.626 15:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.626 15:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.626 15:54:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.626 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.626 15:54:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.884 00:17:40.884 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.884 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.884 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.141 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.141 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.141 15:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.141 15:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.141 15:54:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.141 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.141 { 00:17:41.141 "cntlid": 115, 00:17:41.141 "qid": 0, 00:17:41.141 "state": "enabled", 00:17:41.141 "thread": "nvmf_tgt_poll_group_000", 00:17:41.141 "listen_address": { 00:17:41.141 "trtype": "TCP", 00:17:41.141 "adrfam": "IPv4", 00:17:41.141 "traddr": "10.0.0.2", 00:17:41.141 "trsvcid": "4420" 00:17:41.141 }, 00:17:41.141 "peer_address": { 00:17:41.141 "trtype": "TCP", 00:17:41.141 "adrfam": "IPv4", 00:17:41.141 "traddr": "10.0.0.1", 00:17:41.141 "trsvcid": "48508" 00:17:41.141 }, 00:17:41.141 "auth": { 00:17:41.141 "state": "completed", 00:17:41.141 "digest": "sha512", 00:17:41.141 "dhgroup": "ffdhe3072" 00:17:41.141 } 00:17:41.141 } 
00:17:41.141 ]' 00:17:41.141 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.141 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.141 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.141 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:41.141 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.141 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.141 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.141 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.398 15:54:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:17:42.331 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.331 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:42.331 15:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.331 15:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.331 15:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.331 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.331 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:42.331 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:42.589 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:42.589 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.589 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:42.589 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:42.589 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:42.589 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.589 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.589 15:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.589 15:54:39 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.589 15:54:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.589 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.589 15:54:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.846 00:17:43.104 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.104 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.104 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.361 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.361 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.361 15:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.361 15:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.361 15:54:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.361 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.361 { 00:17:43.361 "cntlid": 117, 00:17:43.361 "qid": 0, 00:17:43.361 "state": "enabled", 00:17:43.361 "thread": "nvmf_tgt_poll_group_000", 00:17:43.361 "listen_address": { 00:17:43.361 "trtype": "TCP", 00:17:43.361 "adrfam": "IPv4", 00:17:43.361 "traddr": "10.0.0.2", 00:17:43.361 "trsvcid": "4420" 00:17:43.361 }, 00:17:43.361 "peer_address": { 00:17:43.361 "trtype": "TCP", 00:17:43.361 "adrfam": "IPv4", 00:17:43.361 "traddr": "10.0.0.1", 00:17:43.361 "trsvcid": "48536" 00:17:43.361 }, 00:17:43.361 "auth": { 00:17:43.361 "state": "completed", 00:17:43.361 "digest": "sha512", 00:17:43.361 "dhgroup": "ffdhe3072" 00:17:43.361 } 00:17:43.361 } 00:17:43.361 ]' 00:17:43.361 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.361 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.361 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.361 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:43.361 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.361 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.361 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.361 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.619 15:54:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:17:44.550 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.550 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:44.550 15:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.550 15:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.550 15:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.550 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.550 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:44.550 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:44.808 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:44.808 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.808 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:44.808 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:44.808 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:44.808 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.808 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:44.808 15:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.808 15:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.808 15:54:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.808 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.808 15:54:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.066 00:17:45.066 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.066 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.066 15:54:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.323 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.323 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.323 15:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.323 15:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.323 15:54:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.323 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.323 { 00:17:45.323 "cntlid": 119, 00:17:45.323 "qid": 0, 00:17:45.323 "state": "enabled", 00:17:45.323 "thread": "nvmf_tgt_poll_group_000", 00:17:45.323 "listen_address": { 00:17:45.323 "trtype": "TCP", 00:17:45.323 "adrfam": "IPv4", 00:17:45.323 "traddr": "10.0.0.2", 00:17:45.323 "trsvcid": "4420" 00:17:45.323 }, 00:17:45.323 "peer_address": { 00:17:45.323 "trtype": "TCP", 00:17:45.323 "adrfam": "IPv4", 00:17:45.323 "traddr": "10.0.0.1", 00:17:45.323 "trsvcid": "52226" 00:17:45.323 }, 00:17:45.323 "auth": { 00:17:45.323 "state": "completed", 00:17:45.323 "digest": "sha512", 00:17:45.323 "dhgroup": "ffdhe3072" 00:17:45.323 } 00:17:45.323 } 00:17:45.323 ]' 00:17:45.323 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.323 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.323 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.323 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:45.323 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.323 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.323 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.323 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.581 15:54:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:17:46.513 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.513 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:46.513 15:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.513 15:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.513 15:54:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.513 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.513 15:54:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.513 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.513 15:54:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.770 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:46.770 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.770 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:46.770 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:46.770 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:46.770 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.770 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.770 15:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.770 15:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.770 15:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.770 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.770 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.335 00:17:47.335 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:47.335 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:47.335 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.592 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.592 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.592 15:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.592 15:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.592 15:54:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.592 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:47.592 { 00:17:47.592 "cntlid": 121, 00:17:47.592 "qid": 0, 00:17:47.592 "state": "enabled", 00:17:47.592 "thread": "nvmf_tgt_poll_group_000", 00:17:47.592 "listen_address": { 00:17:47.592 "trtype": "TCP", 00:17:47.592 "adrfam": "IPv4", 
00:17:47.592 "traddr": "10.0.0.2", 00:17:47.592 "trsvcid": "4420" 00:17:47.592 }, 00:17:47.592 "peer_address": { 00:17:47.592 "trtype": "TCP", 00:17:47.592 "adrfam": "IPv4", 00:17:47.592 "traddr": "10.0.0.1", 00:17:47.592 "trsvcid": "52254" 00:17:47.592 }, 00:17:47.592 "auth": { 00:17:47.592 "state": "completed", 00:17:47.592 "digest": "sha512", 00:17:47.592 "dhgroup": "ffdhe4096" 00:17:47.592 } 00:17:47.592 } 00:17:47.592 ]' 00:17:47.592 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.592 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.592 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.592 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:47.592 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.592 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.592 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.592 15:54:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.849 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:17:48.780 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.780 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:48.780 15:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.780 15:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.780 15:54:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.780 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:48.780 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:48.780 15:54:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:49.037 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:49.037 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.037 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:49.037 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:49.037 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:49.037 15:54:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.038 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.038 15:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.038 15:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.038 15:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.038 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.038 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:49.603 00:17:49.603 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.603 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.603 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.603 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.603 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.603 15:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.603 15:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.603 15:54:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.603 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.603 { 00:17:49.603 "cntlid": 123, 00:17:49.603 "qid": 0, 00:17:49.603 "state": "enabled", 00:17:49.603 "thread": "nvmf_tgt_poll_group_000", 00:17:49.603 "listen_address": { 00:17:49.603 "trtype": "TCP", 00:17:49.603 "adrfam": "IPv4", 00:17:49.603 "traddr": "10.0.0.2", 00:17:49.603 "trsvcid": "4420" 00:17:49.603 }, 00:17:49.603 "peer_address": { 00:17:49.603 "trtype": "TCP", 00:17:49.603 "adrfam": "IPv4", 00:17:49.603 "traddr": "10.0.0.1", 00:17:49.603 "trsvcid": "52280" 00:17:49.603 }, 00:17:49.603 "auth": { 00:17:49.603 "state": "completed", 00:17:49.603 "digest": "sha512", 00:17:49.603 "dhgroup": "ffdhe4096" 00:17:49.603 } 00:17:49.603 } 00:17:49.603 ]' 00:17:49.603 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.888 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.888 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.888 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:49.888 15:54:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.888 15:54:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.888 15:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.888 15:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.168 15:54:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:17:51.100 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.100 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:51.100 15:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.100 15:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.100 15:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.100 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.100 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.100 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:51.358 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:51.358 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.358 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.358 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:51.358 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:51.358 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.358 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.358 15:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.358 15:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.358 15:54:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.358 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.358 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.615 00:17:51.615 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.615 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.615 15:54:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.873 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.873 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.873 15:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.873 15:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.873 15:54:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.873 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.873 { 00:17:51.873 "cntlid": 125, 00:17:51.873 "qid": 0, 00:17:51.873 "state": "enabled", 00:17:51.873 "thread": "nvmf_tgt_poll_group_000", 00:17:51.873 "listen_address": { 00:17:51.873 "trtype": "TCP", 00:17:51.873 "adrfam": "IPv4", 00:17:51.873 "traddr": "10.0.0.2", 00:17:51.873 "trsvcid": "4420" 00:17:51.873 }, 00:17:51.873 "peer_address": { 00:17:51.873 "trtype": "TCP", 00:17:51.873 "adrfam": "IPv4", 00:17:51.873 "traddr": "10.0.0.1", 00:17:51.873 "trsvcid": "52308" 00:17:51.873 }, 00:17:51.873 "auth": { 00:17:51.873 "state": "completed", 00:17:51.873 "digest": "sha512", 00:17:51.873 "dhgroup": "ffdhe4096" 00:17:51.873 } 00:17:51.873 } 00:17:51.873 ]' 00:17:51.873 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.873 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.873 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.130 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:52.130 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.130 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.130 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.130 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.388 15:54:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:17:53.319 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
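The trace repeats one DH-HMAC-CHAP pass per digest/dhgroup/key-index combination (sha512 with ffdhe2048, ffdhe3072, ffdhe4096 and ffdhe6144, keys 0-3, in this section). For reference, a minimal sketch of a single pass, assembled only from the RPC calls and nvme-cli flags that appear verbatim in this log; the variable names and the elided DHHC-1 secrets are placeholders, not anything defined in auth.sh itself:

# One authentication pass, as exercised in the trace above (illustrative variables).
rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
subnqn=nqn.2024-03.io.spdk:cnode0
uuid=cd6acfbe-4794-e311-a299-001e67a97b02
hostnqn=nqn.2014-08.org.nvmexpress:uuid:$uuid
digest=sha512 dhgroup=ffdhe4096 keyid=2    # values from the pass ending just above

# Limit the host-side NVMe driver to the digest/dhgroup under test.
$rpc_py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# Allow the host on the subsystem with this key pair, then attach via the host RPC socket.
$rpc_py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
$rpc_py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# The qpair reported by the target should show the negotiated digest/dhgroup and state "completed".
$rpc_py nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
$rpc_py bdev_nvme_detach_controller nvme0
# Repeat the handshake from nvme-cli with the matching DHHC-1 secrets, then tear down.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$uuid" \
     --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
nvme disconnect -n "$subnqn"
$rpc_py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In the log the same sequence is driven by connect_authenticate in target/auth.sh, with the outer loops over "${dhgroups[@]}" and "${!keys[@]}" supplying the dhgroup and key index for each pass.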
00:17:53.319 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:53.319 15:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.319 15:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.319 15:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.319 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.319 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:53.319 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:53.576 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:53.576 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.576 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:53.576 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:53.576 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:53.576 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.576 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:53.576 15:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.576 15:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.576 15:54:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.576 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.576 15:54:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.833 00:17:53.833 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.833 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.833 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.090 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.090 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.090 15:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.090 15:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:54.090 15:54:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.090 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.090 { 00:17:54.090 "cntlid": 127, 00:17:54.090 "qid": 0, 00:17:54.090 "state": "enabled", 00:17:54.090 "thread": "nvmf_tgt_poll_group_000", 00:17:54.090 "listen_address": { 00:17:54.090 "trtype": "TCP", 00:17:54.090 "adrfam": "IPv4", 00:17:54.090 "traddr": "10.0.0.2", 00:17:54.090 "trsvcid": "4420" 00:17:54.090 }, 00:17:54.090 "peer_address": { 00:17:54.090 "trtype": "TCP", 00:17:54.090 "adrfam": "IPv4", 00:17:54.090 "traddr": "10.0.0.1", 00:17:54.090 "trsvcid": "53588" 00:17:54.090 }, 00:17:54.090 "auth": { 00:17:54.090 "state": "completed", 00:17:54.090 "digest": "sha512", 00:17:54.090 "dhgroup": "ffdhe4096" 00:17:54.090 } 00:17:54.090 } 00:17:54.090 ]' 00:17:54.090 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.090 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.090 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.090 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:54.090 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.348 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.348 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.348 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.348 15:54:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:17:55.279 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.279 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:55.279 15:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.279 15:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.279 15:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.279 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.279 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.279 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:55.279 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:55.537 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:17:55.537 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.537 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:55.537 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:55.537 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:55.537 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.537 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.537 15:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.537 15:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.537 15:54:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.537 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.537 15:54:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.101 00:17:56.101 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.101 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.101 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.359 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.359 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.359 15:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.359 15:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.359 15:54:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.359 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.359 { 00:17:56.359 "cntlid": 129, 00:17:56.359 "qid": 0, 00:17:56.359 "state": "enabled", 00:17:56.359 "thread": "nvmf_tgt_poll_group_000", 00:17:56.359 "listen_address": { 00:17:56.359 "trtype": "TCP", 00:17:56.359 "adrfam": "IPv4", 00:17:56.359 "traddr": "10.0.0.2", 00:17:56.359 "trsvcid": "4420" 00:17:56.359 }, 00:17:56.359 "peer_address": { 00:17:56.359 "trtype": "TCP", 00:17:56.359 "adrfam": "IPv4", 00:17:56.359 "traddr": "10.0.0.1", 00:17:56.359 "trsvcid": "53604" 00:17:56.359 }, 00:17:56.359 "auth": { 00:17:56.359 "state": "completed", 00:17:56.359 "digest": "sha512", 00:17:56.359 "dhgroup": "ffdhe6144" 00:17:56.359 } 00:17:56.359 } 00:17:56.359 ]' 00:17:56.359 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.359 15:54:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.359 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.359 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:56.359 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.359 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.359 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.359 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.617 15:54:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:17:57.547 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.547 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:57.547 15:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.547 15:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.547 15:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.547 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.547 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:57.547 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:57.804 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:57.804 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.804 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:57.804 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:57.804 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:57.804 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.804 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.804 15:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.804 15:54:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.804 15:54:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.804 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.804 15:54:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.368 00:17:58.368 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.368 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.368 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.626 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.626 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.626 15:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.626 15:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.626 15:54:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.626 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.626 { 00:17:58.626 "cntlid": 131, 00:17:58.626 "qid": 0, 00:17:58.626 "state": "enabled", 00:17:58.626 "thread": "nvmf_tgt_poll_group_000", 00:17:58.626 "listen_address": { 00:17:58.626 "trtype": "TCP", 00:17:58.626 "adrfam": "IPv4", 00:17:58.626 "traddr": "10.0.0.2", 00:17:58.626 "trsvcid": "4420" 00:17:58.626 }, 00:17:58.626 "peer_address": { 00:17:58.626 "trtype": "TCP", 00:17:58.626 "adrfam": "IPv4", 00:17:58.626 "traddr": "10.0.0.1", 00:17:58.626 "trsvcid": "53632" 00:17:58.626 }, 00:17:58.626 "auth": { 00:17:58.626 "state": "completed", 00:17:58.626 "digest": "sha512", 00:17:58.626 "dhgroup": "ffdhe6144" 00:17:58.626 } 00:17:58.626 } 00:17:58.626 ]' 00:17:58.626 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.626 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.626 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.626 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:58.626 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.626 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.626 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.626 15:54:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.883 15:54:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:17:59.812 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.812 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.812 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:59.812 15:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.812 15:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.812 15:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.812 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.812 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:59.812 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.068 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:00.069 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.069 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:00.069 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:00.069 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:00.069 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.069 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.069 15:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.069 15:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.069 15:54:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.069 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.069 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.631 00:18:00.631 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.631 15:54:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.631 15:54:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.889 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.889 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.889 15:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.889 15:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.889 15:54:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.889 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.889 { 00:18:00.889 "cntlid": 133, 00:18:00.889 "qid": 0, 00:18:00.889 "state": "enabled", 00:18:00.889 "thread": "nvmf_tgt_poll_group_000", 00:18:00.889 "listen_address": { 00:18:00.889 "trtype": "TCP", 00:18:00.889 "adrfam": "IPv4", 00:18:00.889 "traddr": "10.0.0.2", 00:18:00.889 "trsvcid": "4420" 00:18:00.889 }, 00:18:00.889 "peer_address": { 00:18:00.889 "trtype": "TCP", 00:18:00.889 "adrfam": "IPv4", 00:18:00.889 "traddr": "10.0.0.1", 00:18:00.889 "trsvcid": "53654" 00:18:00.889 }, 00:18:00.889 "auth": { 00:18:00.889 "state": "completed", 00:18:00.889 "digest": "sha512", 00:18:00.889 "dhgroup": "ffdhe6144" 00:18:00.889 } 00:18:00.889 } 00:18:00.889 ]' 00:18:00.889 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.889 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.889 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.889 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:00.889 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.889 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.889 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.889 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.146 15:54:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:18:02.077 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.077 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:02.077 15:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.078 15:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.078 15:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.078 15:54:59 
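Besides the bdev_nvme initiator, each pass also connects once with nvme-cli, passing the same key material inline as DHHC-1 secrets, and disconnects again (the "disconnected 1 controller(s)" lines above). A sketch of that step; HOST_SECRET and CTRL_SECRET are placeholder variables standing in for the generated DHHC-1:NN:...: strings shown in the trace:

# Kernel-initiator check of the same key pair via nvme-cli.
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02

nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

# Expected on success, as in the trace:
#   NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
nvme disconnect -n "$SUBNQN"
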
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.078 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.078 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.335 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:02.335 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.335 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.335 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:02.335 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:02.335 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.335 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:02.335 15:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.335 15:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.335 15:54:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.335 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.335 15:54:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.899 00:18:02.899 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.899 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.899 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.157 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.157 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.157 15:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.157 15:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.157 15:55:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.157 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.157 { 00:18:03.157 "cntlid": 135, 00:18:03.157 "qid": 0, 00:18:03.157 "state": "enabled", 00:18:03.157 "thread": "nvmf_tgt_poll_group_000", 00:18:03.157 "listen_address": { 00:18:03.157 "trtype": "TCP", 00:18:03.157 "adrfam": "IPv4", 00:18:03.157 "traddr": "10.0.0.2", 00:18:03.157 "trsvcid": "4420" 00:18:03.157 }, 
00:18:03.157 "peer_address": { 00:18:03.157 "trtype": "TCP", 00:18:03.157 "adrfam": "IPv4", 00:18:03.157 "traddr": "10.0.0.1", 00:18:03.157 "trsvcid": "53672" 00:18:03.157 }, 00:18:03.157 "auth": { 00:18:03.157 "state": "completed", 00:18:03.157 "digest": "sha512", 00:18:03.157 "dhgroup": "ffdhe6144" 00:18:03.157 } 00:18:03.157 } 00:18:03.157 ]' 00:18:03.157 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.157 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.157 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.157 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.157 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.157 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.157 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.157 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.415 15:55:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:18:04.349 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.349 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:04.349 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.349 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.349 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.349 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:04.349 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.349 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:04.349 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:04.606 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:04.606 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.606 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:04.606 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:04.606 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:04.606 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:04.606 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.606 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.606 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.606 15:55:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.606 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.606 15:55:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.538 00:18:05.538 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.538 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.538 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.796 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.796 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.796 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.796 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.796 15:55:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.796 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.796 { 00:18:05.796 "cntlid": 137, 00:18:05.796 "qid": 0, 00:18:05.796 "state": "enabled", 00:18:05.796 "thread": "nvmf_tgt_poll_group_000", 00:18:05.796 "listen_address": { 00:18:05.796 "trtype": "TCP", 00:18:05.796 "adrfam": "IPv4", 00:18:05.796 "traddr": "10.0.0.2", 00:18:05.796 "trsvcid": "4420" 00:18:05.796 }, 00:18:05.796 "peer_address": { 00:18:05.796 "trtype": "TCP", 00:18:05.796 "adrfam": "IPv4", 00:18:05.796 "traddr": "10.0.0.1", 00:18:05.796 "trsvcid": "33012" 00:18:05.796 }, 00:18:05.796 "auth": { 00:18:05.796 "state": "completed", 00:18:05.796 "digest": "sha512", 00:18:05.796 "dhgroup": "ffdhe8192" 00:18:05.796 } 00:18:05.796 } 00:18:05.796 ]' 00:18:05.796 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.796 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:05.796 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.796 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:05.796 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.796 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.796 15:55:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.796 15:55:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.053 15:55:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:18:06.985 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.985 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:06.985 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.985 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.985 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.985 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.985 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:06.985 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.242 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:07.242 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.243 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.243 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:07.243 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:07.243 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.243 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.243 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.243 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.243 15:55:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.243 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.243 15:55:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.172 00:18:08.172 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.172 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.172 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.172 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.172 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.172 15:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.172 15:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.172 15:55:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.172 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.172 { 00:18:08.172 "cntlid": 139, 00:18:08.172 "qid": 0, 00:18:08.172 "state": "enabled", 00:18:08.172 "thread": "nvmf_tgt_poll_group_000", 00:18:08.172 "listen_address": { 00:18:08.172 "trtype": "TCP", 00:18:08.172 "adrfam": "IPv4", 00:18:08.172 "traddr": "10.0.0.2", 00:18:08.172 "trsvcid": "4420" 00:18:08.172 }, 00:18:08.172 "peer_address": { 00:18:08.172 "trtype": "TCP", 00:18:08.172 "adrfam": "IPv4", 00:18:08.172 "traddr": "10.0.0.1", 00:18:08.172 "trsvcid": "33026" 00:18:08.172 }, 00:18:08.172 "auth": { 00:18:08.172 "state": "completed", 00:18:08.172 "digest": "sha512", 00:18:08.172 "dhgroup": "ffdhe8192" 00:18:08.172 } 00:18:08.172 } 00:18:08.172 ]' 00:18:08.172 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.429 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.429 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.429 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.429 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.429 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.429 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.430 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.687 15:55:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:N2MxMDk0NmJiMjJhZmRjNWRmMWMyM2M0MjcxNjBkMTm2goLT: --dhchap-ctrl-secret DHHC-1:02:MGExNDczOTc2ZDIyOGI1Mzk2MjFkYzNhNjNmMjRkODY5Y2NmYmUxNGZiZjg3Yjg5prPXtw==: 00:18:09.619 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.619 15:55:06 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:09.619 15:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.619 15:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.619 15:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.619 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.619 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.619 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.877 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:09.877 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.877 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.877 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:09.877 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:09.877 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.877 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.877 15:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.877 15:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.877 15:55:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.877 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.877 15:55:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.809 00:18:10.809 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.809 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.809 15:55:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.809 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.809 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.809 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.809 15:55:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:10.809 15:55:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.809 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.809 { 00:18:10.809 "cntlid": 141, 00:18:10.809 "qid": 0, 00:18:10.809 "state": "enabled", 00:18:10.809 "thread": "nvmf_tgt_poll_group_000", 00:18:10.809 "listen_address": { 00:18:10.809 "trtype": "TCP", 00:18:10.809 "adrfam": "IPv4", 00:18:10.809 "traddr": "10.0.0.2", 00:18:10.809 "trsvcid": "4420" 00:18:10.809 }, 00:18:10.809 "peer_address": { 00:18:10.809 "trtype": "TCP", 00:18:10.809 "adrfam": "IPv4", 00:18:10.809 "traddr": "10.0.0.1", 00:18:10.809 "trsvcid": "33048" 00:18:10.809 }, 00:18:10.809 "auth": { 00:18:10.809 "state": "completed", 00:18:10.809 "digest": "sha512", 00:18:10.809 "dhgroup": "ffdhe8192" 00:18:10.809 } 00:18:10.809 } 00:18:10.809 ]' 00:18:10.809 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.809 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.809 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.067 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.067 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.067 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.067 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.067 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.324 15:55:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:ZDExOGU5OTljNzA3M2I1Yzg4NGRmNDkyZDE5MTkwMWExMGE5YTFiM2EyNGNjZDgy7uUZJA==: --dhchap-ctrl-secret DHHC-1:01:OGI2YzE2OGNiOTcyMmViODE4MDAxMGExNTFlZjY0NWVsxKvg: 00:18:12.258 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.258 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:12.258 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.258 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.258 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.258 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.258 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.258 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.516 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:18:12.516 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.516 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.516 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:12.516 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:12.516 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.516 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:12.516 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.516 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.516 15:55:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.516 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.516 15:55:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:13.498 00:18:13.498 15:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.498 15:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.498 15:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.756 15:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.756 15:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.756 15:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.756 15:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.756 15:55:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.756 15:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.756 { 00:18:13.756 "cntlid": 143, 00:18:13.756 "qid": 0, 00:18:13.756 "state": "enabled", 00:18:13.756 "thread": "nvmf_tgt_poll_group_000", 00:18:13.756 "listen_address": { 00:18:13.756 "trtype": "TCP", 00:18:13.756 "adrfam": "IPv4", 00:18:13.756 "traddr": "10.0.0.2", 00:18:13.756 "trsvcid": "4420" 00:18:13.756 }, 00:18:13.756 "peer_address": { 00:18:13.756 "trtype": "TCP", 00:18:13.756 "adrfam": "IPv4", 00:18:13.756 "traddr": "10.0.0.1", 00:18:13.756 "trsvcid": "33070" 00:18:13.756 }, 00:18:13.756 "auth": { 00:18:13.756 "state": "completed", 00:18:13.756 "digest": "sha512", 00:18:13.756 "dhgroup": "ffdhe8192" 00:18:13.756 } 00:18:13.756 } 00:18:13.756 ]' 00:18:13.756 15:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.756 15:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.756 
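Note that the key3 iterations above call nvmf_subsystem_add_host and bdev_nvme_attach_controller with --dhchap-key key3 only, without a --dhchap-ctrlr-key: ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion in connect_authenticate drops the controller key and that pass exercises unidirectional (host-only) authentication. A small illustration of the expansion pattern:

# ${var:+word} expands to nothing when var is unset or empty, so the
# controller-key arguments vanish for key indexes that have no ckey.
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2)        # no entry for index 3
for idx in 0 1 2 3; do
        ckey_args=(${ckeys[$idx]:+--dhchap-ctrlr-key "ckey$idx"})
        echo "key$idx -> ${ckey_args[*]:-(host-only auth)}"
done
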
15:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.756 15:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.756 15:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.756 15:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.756 15:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.756 15:55:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.014 15:55:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:18:14.947 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.947 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:14.947 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.947 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.947 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.947 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:14.947 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:14.947 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:14.947 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:14.947 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:14.947 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:15.205 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:15.205 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.205 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.205 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:15.206 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:15.206 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.206 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:18:15.206 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.206 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.206 15:55:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.206 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.206 15:55:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.137 00:18:16.137 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.137 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.137 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.137 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.137 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.137 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.137 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.137 15:55:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.137 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.137 { 00:18:16.137 "cntlid": 145, 00:18:16.137 "qid": 0, 00:18:16.137 "state": "enabled", 00:18:16.137 "thread": "nvmf_tgt_poll_group_000", 00:18:16.137 "listen_address": { 00:18:16.137 "trtype": "TCP", 00:18:16.137 "adrfam": "IPv4", 00:18:16.137 "traddr": "10.0.0.2", 00:18:16.137 "trsvcid": "4420" 00:18:16.137 }, 00:18:16.137 "peer_address": { 00:18:16.137 "trtype": "TCP", 00:18:16.137 "adrfam": "IPv4", 00:18:16.137 "traddr": "10.0.0.1", 00:18:16.137 "trsvcid": "50288" 00:18:16.137 }, 00:18:16.137 "auth": { 00:18:16.137 "state": "completed", 00:18:16.137 "digest": "sha512", 00:18:16.137 "dhgroup": "ffdhe8192" 00:18:16.137 } 00:18:16.137 } 00:18:16.137 ]' 00:18:16.137 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.137 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.137 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.394 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.395 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.395 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.395 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.395 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.651 15:55:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:ODI0OTk3NzlmNmNjNzRmODVhZTkzNWVhMjliOTVkZDg4MjIxMDAxNTAyMGI5MDE4Y4q4yA==: --dhchap-ctrl-secret DHHC-1:03:YzJmMzc1ODEwMGQwZGFhMzUyN2Y1OGMyZTllZmI3ODM1NmE4Yzk2Zjg5MzYxNzI2YTYxNmFlNzc0NzI1YzdkNN7P3fc=: 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:17.582 15:55:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:17.583 15:55:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:18:18.146 request: 00:18:18.146 { 00:18:18.146 "name": "nvme0", 00:18:18.146 "trtype": "tcp", 00:18:18.146 "traddr": "10.0.0.2", 00:18:18.146 "adrfam": "ipv4", 00:18:18.146 "trsvcid": "4420", 00:18:18.146 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:18.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:18.146 "prchk_reftag": false, 00:18:18.146 "prchk_guard": false, 00:18:18.146 "hdgst": false, 00:18:18.146 "ddgst": false, 00:18:18.146 "dhchap_key": "key2", 00:18:18.146 "method": "bdev_nvme_attach_controller", 00:18:18.146 "req_id": 1 00:18:18.146 } 00:18:18.146 Got JSON-RPC error response 00:18:18.146 response: 00:18:18.146 { 00:18:18.146 "code": -5, 00:18:18.146 "message": "Input/output error" 00:18:18.146 } 00:18:18.146 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:18.146 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:18.147 15:55:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.078 request: 00:18:19.078 { 00:18:19.078 "name": "nvme0", 00:18:19.078 "trtype": "tcp", 00:18:19.078 "traddr": "10.0.0.2", 00:18:19.078 "adrfam": "ipv4", 00:18:19.078 "trsvcid": "4420", 00:18:19.078 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:19.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:19.078 "prchk_reftag": false, 00:18:19.078 "prchk_guard": false, 00:18:19.078 "hdgst": false, 00:18:19.078 "ddgst": false, 00:18:19.078 "dhchap_key": "key1", 00:18:19.078 "dhchap_ctrlr_key": "ckey2", 00:18:19.078 "method": "bdev_nvme_attach_controller", 00:18:19.078 "req_id": 1 00:18:19.078 } 00:18:19.078 Got JSON-RPC error response 00:18:19.078 response: 00:18:19.078 { 00:18:19.078 "code": -5, 00:18:19.078 "message": "Input/output error" 00:18:19.078 } 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.078 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.009 request: 00:18:20.010 { 00:18:20.010 "name": "nvme0", 00:18:20.010 "trtype": "tcp", 00:18:20.010 "traddr": "10.0.0.2", 00:18:20.010 "adrfam": "ipv4", 00:18:20.010 "trsvcid": "4420", 00:18:20.010 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:20.010 "prchk_reftag": false, 00:18:20.010 "prchk_guard": false, 00:18:20.010 "hdgst": false, 00:18:20.010 "ddgst": false, 00:18:20.010 "dhchap_key": "key1", 00:18:20.010 "dhchap_ctrlr_key": "ckey1", 00:18:20.010 "method": "bdev_nvme_attach_controller", 00:18:20.010 "req_id": 1 00:18:20.010 } 00:18:20.010 Got JSON-RPC error response 00:18:20.010 response: 00:18:20.010 { 00:18:20.010 "code": -5, 00:18:20.010 "message": "Input/output error" 00:18:20.010 } 00:18:20.010 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:20.010 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:20.010 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:20.010 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:20.010 15:55:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:20.010 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.010 15:55:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 749506 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 749506 ']' 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 749506 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 749506 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 749506' 00:18:20.010 killing process with pid 749506 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 749506 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 749506 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=771390 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 771390 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 771390 ']' 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.010 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.268 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.268 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:20.268 15:55:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:20.268 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:20.268 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.268 15:55:17 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.268 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:20.268 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 771390 00:18:20.268 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 771390 ']' 00:18:20.268 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.268 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.525 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
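The three failed attach attempts above all follow the same negative-path pattern: the target registers the host with one DH-HMAC-CHAP key pair, the host then calls bdev_nvme_attach_controller with a different pair, and the test only passes if the JSON-RPC call fails with code -5. A minimal standalone sketch of that pattern, assuming the sockets from this run (/var/tmp/spdk.sock for the target, /var/tmp/host.sock for the host bdev layer) and the key names key1/ckey1/ckey2 already configured exactly as in the log:

#!/usr/bin/env bash
# Sketch of the mismatched-key check; paths, NQNs and key names are taken from the
# log above, and the keys are assumed to be loaded on both sides already.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Target side: allow the host with key1 and controller key ckey1.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach with a mismatched controller key (ckey2). This is expected to
# fail with JSON-RPC error -5 (Input/output error), as in the responses above.
if "$RPC" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
  echo "attach unexpectedly succeeded" >&2
  exit 1
fi
echo "mismatched controller key was rejected, as expected"

The same shape repeats further down with key3, once the host's allowed digests and DH groups are restricted via bdev_nvme_set_options.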
00:18:20.525 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.525 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.525 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.525 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:20.525 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:20.525 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.525 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.781 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.781 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:20.781 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.781 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:20.781 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:20.781 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:20.781 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.781 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:20.781 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.781 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.781 15:55:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.781 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.781 15:55:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.711 00:18:21.711 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.711 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.711 15:55:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.711 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.711 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.711 15:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.711 15:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.968 15:55:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.968 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.968 { 00:18:21.968 
"cntlid": 1, 00:18:21.968 "qid": 0, 00:18:21.968 "state": "enabled", 00:18:21.968 "thread": "nvmf_tgt_poll_group_000", 00:18:21.968 "listen_address": { 00:18:21.968 "trtype": "TCP", 00:18:21.968 "adrfam": "IPv4", 00:18:21.968 "traddr": "10.0.0.2", 00:18:21.968 "trsvcid": "4420" 00:18:21.968 }, 00:18:21.968 "peer_address": { 00:18:21.968 "trtype": "TCP", 00:18:21.968 "adrfam": "IPv4", 00:18:21.968 "traddr": "10.0.0.1", 00:18:21.968 "trsvcid": "50350" 00:18:21.968 }, 00:18:21.968 "auth": { 00:18:21.968 "state": "completed", 00:18:21.968 "digest": "sha512", 00:18:21.968 "dhgroup": "ffdhe8192" 00:18:21.968 } 00:18:21.968 } 00:18:21.968 ]' 00:18:21.968 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.968 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:21.968 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.968 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.968 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.968 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.968 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.968 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.225 15:55:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ZTA0Yzk1ZWQ2YzIxNTkyYzA5OTJlZTczOGFmZDdhNjZiM2I5OTE2NDZkY2E2NTEzNWMyYjc2Yzg0N2UwYTdjZCEHmN8=: 00:18:23.173 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.173 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:23.173 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.173 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.173 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.173 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:23.173 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.173 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.173 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.173 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:23.173 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:23.430 15:55:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.430 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:23.430 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.430 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:23.430 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.430 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:23.430 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.430 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.430 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.687 request: 00:18:23.687 { 00:18:23.687 "name": "nvme0", 00:18:23.687 "trtype": "tcp", 00:18:23.687 "traddr": "10.0.0.2", 00:18:23.687 "adrfam": "ipv4", 00:18:23.687 "trsvcid": "4420", 00:18:23.687 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:23.687 "prchk_reftag": false, 00:18:23.687 "prchk_guard": false, 00:18:23.687 "hdgst": false, 00:18:23.687 "ddgst": false, 00:18:23.687 "dhchap_key": "key3", 00:18:23.687 "method": "bdev_nvme_attach_controller", 00:18:23.687 "req_id": 1 00:18:23.687 } 00:18:23.687 Got JSON-RPC error response 00:18:23.687 response: 00:18:23.687 { 00:18:23.687 "code": -5, 00:18:23.687 "message": "Input/output error" 00:18:23.687 } 00:18:23.687 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:23.687 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:23.687 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:23.687 15:55:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:23.687 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:23.687 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:23.687 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:23.687 15:55:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:23.943 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.943 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:23.943 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.943 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:23.943 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.943 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:23.943 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.943 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.943 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.200 request: 00:18:24.200 { 00:18:24.200 "name": "nvme0", 00:18:24.200 "trtype": "tcp", 00:18:24.200 "traddr": "10.0.0.2", 00:18:24.200 "adrfam": "ipv4", 00:18:24.200 "trsvcid": "4420", 00:18:24.200 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:24.200 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:24.200 "prchk_reftag": false, 00:18:24.200 "prchk_guard": false, 00:18:24.200 "hdgst": false, 00:18:24.200 "ddgst": false, 00:18:24.200 "dhchap_key": "key3", 00:18:24.200 "method": "bdev_nvme_attach_controller", 00:18:24.200 "req_id": 1 00:18:24.200 } 00:18:24.200 Got JSON-RPC error response 00:18:24.200 response: 00:18:24.200 { 00:18:24.200 "code": -5, 00:18:24.200 "message": "Input/output error" 00:18:24.200 } 00:18:24.200 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:24.200 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:24.200 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:24.200 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:24.200 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:24.200 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:24.200 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:24.200 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.200 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.200 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.458 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.716 request: 00:18:24.716 { 00:18:24.716 "name": "nvme0", 00:18:24.716 "trtype": "tcp", 00:18:24.716 "traddr": "10.0.0.2", 00:18:24.716 "adrfam": "ipv4", 00:18:24.716 "trsvcid": "4420", 00:18:24.716 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:24.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:18:24.716 "prchk_reftag": false, 00:18:24.716 "prchk_guard": false, 00:18:24.716 "hdgst": false, 00:18:24.716 "ddgst": false, 00:18:24.716 
"dhchap_key": "key0", 00:18:24.716 "dhchap_ctrlr_key": "key1", 00:18:24.716 "method": "bdev_nvme_attach_controller", 00:18:24.716 "req_id": 1 00:18:24.716 } 00:18:24.716 Got JSON-RPC error response 00:18:24.716 response: 00:18:24.716 { 00:18:24.716 "code": -5, 00:18:24.716 "message": "Input/output error" 00:18:24.716 } 00:18:24.716 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:24.716 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:24.716 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:24.716 15:55:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:24.716 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:24.716 15:55:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:25.281 00:18:25.281 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:25.281 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:25.281 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.281 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.281 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.281 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.539 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:25.539 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:25.539 15:55:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 749646 00:18:25.539 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 749646 ']' 00:18:25.539 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 749646 00:18:25.796 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:25.796 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.796 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 749646 00:18:25.796 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:25.796 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:25.796 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 749646' 00:18:25.796 killing process with pid 749646 00:18:25.796 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 749646 00:18:25.796 15:55:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 749646 00:18:26.054 
15:55:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:26.054 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:26.054 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:26.054 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:26.054 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:26.054 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:26.054 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:26.054 rmmod nvme_tcp 00:18:26.054 rmmod nvme_fabrics 00:18:26.311 rmmod nvme_keyring 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 771390 ']' 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 771390 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 771390 ']' 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 771390 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 771390 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 771390' 00:18:26.311 killing process with pid 771390 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 771390 00:18:26.311 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 771390 00:18:26.570 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:26.570 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:26.570 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:26.570 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:26.570 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:26.570 15:55:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.570 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.570 15:55:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.475 15:55:25 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:28.475 15:55:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.U6F /tmp/spdk.key-sha256.UQI /tmp/spdk.key-sha384.esZ /tmp/spdk.key-sha512.Riw /tmp/spdk.key-sha512.210 /tmp/spdk.key-sha384.e3T /tmp/spdk.key-sha256.IYn '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:28.475 00:18:28.475 real 3m1.504s 00:18:28.475 user 7m4.469s 00:18:28.475 sys 0m25.155s 00:18:28.475 15:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:28.475 15:55:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.475 ************************************ 00:18:28.475 END TEST nvmf_auth_target 00:18:28.475 ************************************ 00:18:28.475 15:55:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:28.475 15:55:25 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:18:28.475 15:55:25 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:28.475 15:55:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:28.475 15:55:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:28.475 15:55:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:28.475 ************************************ 00:18:28.475 START TEST nvmf_bdevio_no_huge 00:18:28.475 ************************************ 00:18:28.475 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:28.733 * Looking for test storage... 00:18:28.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
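The bdevio stage that begins here is driven by the same run_test wrapper pattern as the auth test: a target script invoked with a transport flag plus --no-hugepages. A sketch of running just this stage by itself, assuming a built SPDK tree at the workspace path used throughout this log and root privileges for the netns/NIC setup that nvmf/common.sh performs; anything not shown in the log (environment, NIC inventory) is assumed to already be in place:

# Hypothetical standalone invocation of this stage; the script path and flags are
# exactly the ones printed by run_test above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cd "$SPDK_DIR"
sudo ./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages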
00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.733 15:55:25 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:28.733 15:55:25 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:30.638 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:30.638 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:30.638 
15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:30.638 Found net devices under 0000:84:00.0: cvl_0_0 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:30.638 Found net devices under 0000:84:00.1: cvl_0_1 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:30.638 15:55:27 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:30.638 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:30.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:30.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:18:30.896 00:18:30.896 --- 10.0.0.2 ping statistics --- 00:18:30.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.896 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:30.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:30.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:18:30.896 00:18:30.896 --- 10.0.0.1 ping statistics --- 00:18:30.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.896 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=774059 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 774059 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 774059 ']' 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:30.896 15:55:27 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:30.897 [2024-07-12 15:55:28.044588] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:18:30.897 [2024-07-12 15:55:28.044689] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:30.897 [2024-07-12 15:55:28.115053] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:31.154 [2024-07-12 15:55:28.215113] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.154 [2024-07-12 15:55:28.215166] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.154 [2024-07-12 15:55:28.215189] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.154 [2024-07-12 15:55:28.215200] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.154 [2024-07-12 15:55:28.215210] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
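Because this target was started with -e 0xFFFF, the trace notices above apply. A short sketch of the two options the notice itself mentions; the build/bin location of spdk_trace is an assumption, since the binary path is not shown in this log:

# Snapshot the running nvmf app's trace events (app name and shm id 0 as printed above).
# NOTE: the build/bin path for spdk_trace is assumed; adjust to wherever it was built.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0

# Or keep the shared-memory trace file for offline analysis, as the notice suggests.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0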
00:18:31.154 [2024-07-12 15:55:28.215362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:31.154 [2024-07-12 15:55:28.215430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:31.154 [2024-07-12 15:55:28.215496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:31.154 [2024-07-12 15:55:28.215498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:31.154 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.154 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:31.154 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:31.154 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:31.154 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.154 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.154 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:31.154 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.154 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.154 [2024-07-12 15:55:28.337098] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.154 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.154 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.155 Malloc0 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.155 [2024-07-12 15:55:28.375071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:31.155 { 00:18:31.155 "params": { 00:18:31.155 "name": "Nvme$subsystem", 00:18:31.155 "trtype": "$TEST_TRANSPORT", 00:18:31.155 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:31.155 "adrfam": "ipv4", 00:18:31.155 "trsvcid": "$NVMF_PORT", 00:18:31.155 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:31.155 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:31.155 "hdgst": ${hdgst:-false}, 00:18:31.155 "ddgst": ${ddgst:-false} 00:18:31.155 }, 00:18:31.155 "method": "bdev_nvme_attach_controller" 00:18:31.155 } 00:18:31.155 EOF 00:18:31.155 )") 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:31.155 15:55:28 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:31.155 "params": { 00:18:31.155 "name": "Nvme1", 00:18:31.155 "trtype": "tcp", 00:18:31.155 "traddr": "10.0.0.2", 00:18:31.155 "adrfam": "ipv4", 00:18:31.155 "trsvcid": "4420", 00:18:31.155 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.155 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.155 "hdgst": false, 00:18:31.155 "ddgst": false 00:18:31.155 }, 00:18:31.155 "method": "bdev_nvme_attach_controller" 00:18:31.155 }' 00:18:31.155 [2024-07-12 15:55:28.419555] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
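The rpc_cmd calls traced above assemble the target side of the bdevio run: a TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.2:4420; gen_nvmf_target_json then prints the matching bdev_nvme_attach_controller parameters that bdevio reads over /dev/fd/62. The same target-side setup issued directly with rpc.py looks like the sketch below (rpc_cmd in the harness is a thin wrapper that routes these commands to the running app's RPC socket):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB at 512-byte blocks -> 131072 blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # exposed as NSID 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420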
00:18:31.155 [2024-07-12 15:55:28.419654] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid774204 ] 00:18:31.412 [2024-07-12 15:55:28.486640] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:31.412 [2024-07-12 15:55:28.599754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.412 [2024-07-12 15:55:28.599788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.413 [2024-07-12 15:55:28.599792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.670 I/O targets: 00:18:31.670 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:31.670 00:18:31.670 00:18:31.670 CUnit - A unit testing framework for C - Version 2.1-3 00:18:31.670 http://cunit.sourceforge.net/ 00:18:31.670 00:18:31.670 00:18:31.670 Suite: bdevio tests on: Nvme1n1 00:18:31.670 Test: blockdev write read block ...passed 00:18:31.670 Test: blockdev write zeroes read block ...passed 00:18:31.670 Test: blockdev write zeroes read no split ...passed 00:18:31.670 Test: blockdev write zeroes read split ...passed 00:18:31.670 Test: blockdev write zeroes read split partial ...passed 00:18:31.670 Test: blockdev reset ...[2024-07-12 15:55:28.927258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:31.670 [2024-07-12 15:55:28.927371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc66860 (9): Bad file descriptor 00:18:31.670 [2024-07-12 15:55:28.939443] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:31.670 passed 00:18:31.926 Test: blockdev write read 8 blocks ...passed 00:18:31.926 Test: blockdev write read size > 128k ...passed 00:18:31.926 Test: blockdev write read invalid size ...passed 00:18:31.926 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:31.926 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:31.926 Test: blockdev write read max offset ...passed 00:18:31.926 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:31.926 Test: blockdev writev readv 8 blocks ...passed 00:18:31.926 Test: blockdev writev readv 30 x 1block ...passed 00:18:31.926 Test: blockdev writev readv block ...passed 00:18:31.926 Test: blockdev writev readv size > 128k ...passed 00:18:31.926 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:31.926 Test: blockdev comparev and writev ...[2024-07-12 15:55:29.159154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.926 [2024-07-12 15:55:29.159189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.926 [2024-07-12 15:55:29.159212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.926 [2024-07-12 15:55:29.159230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.926 [2024-07-12 15:55:29.159664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.926 [2024-07-12 15:55:29.159691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:31.926 [2024-07-12 15:55:29.159713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.926 [2024-07-12 15:55:29.159730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:31.926 [2024-07-12 15:55:29.160182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.926 [2024-07-12 15:55:29.160208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:31.926 [2024-07-12 15:55:29.160230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.926 [2024-07-12 15:55:29.160247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:31.927 [2024-07-12 15:55:29.160684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.927 [2024-07-12 15:55:29.160709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:31.927 [2024-07-12 15:55:29.160731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.927 [2024-07-12 15:55:29.160756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:31.927 passed 00:18:32.184 Test: blockdev nvme passthru rw ...passed 00:18:32.184 Test: blockdev nvme passthru vendor specific ...[2024-07-12 15:55:29.245261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.184 [2024-07-12 15:55:29.245289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:32.184 [2024-07-12 15:55:29.245591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.184 [2024-07-12 15:55:29.245615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:32.184 [2024-07-12 15:55:29.245874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.184 [2024-07-12 15:55:29.245908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:32.184 [2024-07-12 15:55:29.246082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.184 [2024-07-12 15:55:29.246106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:32.184 passed 00:18:32.184 Test: blockdev nvme admin passthru ...passed 00:18:32.184 Test: blockdev copy ...passed 00:18:32.184 00:18:32.184 Run Summary: Type Total Ran Passed Failed Inactive 00:18:32.184 suites 1 1 n/a 0 0 00:18:32.184 tests 23 23 23 0 0 00:18:32.184 asserts 152 152 152 0 n/a 00:18:32.184 00:18:32.184 Elapsed time = 1.080 seconds 00:18:32.441 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:32.441 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.441 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:32.441 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.441 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:32.441 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:32.441 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:32.441 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:32.441 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:32.441 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:32.441 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:32.442 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:32.442 rmmod nvme_tcp 00:18:32.442 rmmod nvme_fabrics 00:18:32.442 rmmod nvme_keyring 00:18:32.442 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:32.442 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:18:32.442 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:32.442 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 774059 ']' 00:18:32.442 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 774059 00:18:32.442 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 774059 ']' 00:18:32.442 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 774059 00:18:32.442 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:32.442 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.442 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 774059 00:18:32.699 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:32.699 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:32.699 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 774059' 00:18:32.699 killing process with pid 774059 00:18:32.699 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 774059 00:18:32.699 15:55:29 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 774059 00:18:32.959 15:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:32.959 15:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:32.959 15:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:32.959 15:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:32.959 15:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:32.959 15:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.959 15:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:32.959 15:55:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.489 15:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:35.489 00:18:35.489 real 0m6.461s 00:18:35.489 user 0m10.094s 00:18:35.489 sys 0m2.466s 00:18:35.489 15:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:35.489 15:55:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.489 ************************************ 00:18:35.489 END TEST nvmf_bdevio_no_huge 00:18:35.489 ************************************ 00:18:35.489 15:55:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:35.489 15:55:32 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:35.489 15:55:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:35.489 15:55:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:35.489 15:55:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:35.489 ************************************ 00:18:35.489 START TEST nvmf_tls 00:18:35.489 ************************************ 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:35.489 * Looking for test storage... 
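Before the TLS suite starts, the teardown traced above removes the nvme-tcp, nvme-fabrics and nvme-keyring modules, and killprocess only signals pid 774059 after confirming it is still an SPDK reactor rather than a sudo wrapper. A stripped-down sketch of that guard-then-kill step (a hypothetical standalone form; the real helper also echoes progress and retries the wait):

pid=774059
pname=$(ps --no-headers -o comm= "$pid")
if [[ "$pname" != sudo ]]; then        # refuse to signal a sudo wrapper directly
    kill "$pid"                        # SIGTERM; nvmf_tgt's handler shuts the reactors down
    wait "$pid" 2>/dev/null || true    # wait only succeeds when the target is a child of this shell
fi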
00:18:35.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.489 15:55:32 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:35.490 15:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:37.388 
15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:37.388 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:18:37.389 Found 0000:84:00.0 (0x8086 - 0x159b) 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:18:37.389 Found 0000:84:00.1 (0x8086 - 0x159b) 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:18:37.389 Found net devices under 0000:84:00.0: cvl_0_0 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:18:37.389 Found net devices under 0000:84:00.1: cvl_0_1 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:37.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:37.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:18:37.389 00:18:37.389 --- 10.0.0.2 ping statistics --- 00:18:37.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.389 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:37.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:37.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:18:37.389 00:18:37.389 --- 10.0.0.1 ping statistics --- 00:18:37.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:37.389 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=776287 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 776287 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 776287 ']' 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:37.389 15:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.672 [2024-07-12 15:55:34.688500] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:18:37.672 [2024-07-12 15:55:34.688574] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.672 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.672 [2024-07-12 15:55:34.753150] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.672 [2024-07-12 15:55:34.856989] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.672 [2024-07-12 15:55:34.857063] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:37.672 [2024-07-12 15:55:34.857076] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.672 [2024-07-12 15:55:34.857087] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.672 [2024-07-12 15:55:34.857096] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:37.672 [2024-07-12 15:55:34.857121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.672 15:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.672 15:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:37.672 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:37.672 15:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:37.672 15:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.946 15:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.946 15:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:37.946 15:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:37.946 true 00:18:38.204 15:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:38.204 15:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:38.204 15:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:38.204 15:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:38.204 15:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:38.461 15:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:38.461 15:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:38.719 15:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:38.719 15:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:38.719 15:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:38.977 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:38.977 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:39.234 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:39.234 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:39.234 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:39.234 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:39.491 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:39.491 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:39.491 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:39.749 15:55:36 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:39.749 15:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:40.007 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:40.007 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:40.007 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:40.265 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:40.265 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.NusktBuLR7 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.iPgBSD1ZU1 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.NusktBuLR7 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.iPgBSD1ZU1 00:18:40.522 15:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:18:40.778 15:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:41.342 15:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.NusktBuLR7 00:18:41.342 15:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.NusktBuLR7 00:18:41.342 15:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:41.599 [2024-07-12 15:55:38.638106] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.599 15:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:41.855 15:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:41.855 [2024-07-12 15:55:39.135403] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:41.855 [2024-07-12 15:55:39.135640] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.112 15:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:42.370 malloc0 00:18:42.370 15:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:42.627 15:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NusktBuLR7 00:18:42.627 [2024-07-12 15:55:39.920297] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:42.885 15:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.NusktBuLR7 00:18:42.885 EAL: No free 2048 kB hugepages reported on node 1 00:18:52.841 Initializing NVMe Controllers 00:18:52.841 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:52.841 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:52.841 Initialization complete. Launching workers. 
00:18:52.841 ======================================================== 00:18:52.841 Latency(us) 00:18:52.841 Device Information : IOPS MiB/s Average min max 00:18:52.841 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8655.95 33.81 7392.10 5864.43 11728.32 00:18:52.841 ======================================================== 00:18:52.841 Total : 8655.95 33.81 7392.10 5864.43 11728.32 00:18:52.841 00:18:52.841 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NusktBuLR7 00:18:52.841 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:52.841 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:52.841 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:52.841 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NusktBuLR7' 00:18:52.841 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:52.842 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=778131 00:18:52.842 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:52.842 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 778131 /var/tmp/bdevperf.sock 00:18:52.842 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:52.842 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 778131 ']' 00:18:52.842 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:52.842 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.842 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:52.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:52.842 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.842 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.842 [2024-07-12 15:55:50.088822] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
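The two key files used above, /tmp/tmp.NusktBuLR7 holding the server-side PSK and /tmp/tmp.iPgBSD1ZU1 holding a deliberately different key, come from format_interchange_psk, which wraps a configured PSK in the NVMe/TCP TLS PSK interchange format: an NVMeTLSkey-1:<hash>: prefix, base64 of the key bytes with a CRC32 appended, and a closing colon. A sketch of that encoding, assuming the CRC32 is appended little-endian (which reproduces the NVMeTLSkey-1:01:MDAx... value seen in the trace):

gen_psk() {
    local key=$1 hash=$2
    python3 - "$key" "$hash" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                  # configured PSK exactly as passed on the command line
crc = struct.pack("<I", zlib.crc32(key))    # assumption: 4-byte little-endian CRC32 suffix
print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
}
gen_psk 00112233445566778899aabbccddeeff 1 > /tmp/psk.key && chmod 0600 /tmp/psk.key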
00:18:52.842 [2024-07-12 15:55:50.088918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778131 ] 00:18:52.842 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.099 [2024-07-12 15:55:50.157647] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.100 [2024-07-12 15:55:50.273304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.100 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.100 15:55:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:53.100 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NusktBuLR7 00:18:53.665 [2024-07-12 15:55:50.658334] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:53.665 [2024-07-12 15:55:50.658434] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:53.665 TLSTESTn1 00:18:53.665 15:55:50 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:53.665 Running I/O for 10 seconds... 00:19:03.645 00:19:03.645 Latency(us) 00:19:03.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.645 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:03.645 Verification LBA range: start 0x0 length 0x2000 00:19:03.645 TLSTESTn1 : 10.02 3550.97 13.87 0.00 0.00 35985.90 8495.41 31457.28 00:19:03.645 =================================================================================================================== 00:19:03.645 Total : 3550.97 13.87 0.00 0.00 35985.90 8495.41 31457.28 00:19:03.645 0 00:19:03.645 15:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:03.645 15:56:00 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 778131 00:19:03.645 15:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 778131 ']' 00:19:03.645 15:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 778131 00:19:03.645 15:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:03.645 15:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:03.645 15:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 778131 00:19:03.902 15:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:03.902 15:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:03.902 15:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 778131' 00:19:03.902 killing process with pid 778131 00:19:03.902 15:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 778131 00:19:03.902 Received shutdown signal, test time was about 10.000000 seconds 00:19:03.902 00:19:03.902 Latency(us) 00:19:03.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:19:03.902 =================================================================================================================== 00:19:03.902 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:03.902 [2024-07-12 15:56:00.956396] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:03.902 15:56:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 778131 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iPgBSD1ZU1 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iPgBSD1ZU1 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iPgBSD1ZU1 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.iPgBSD1ZU1' 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=779383 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 779383 /var/tmp/bdevperf.sock 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 779383 ']' 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:04.160 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.160 [2024-07-12 15:56:01.271915] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
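run_bdevperf, exercised above first with the matching key and then (under NOT) with the mismatched /tmp/tmp.iPgBSD1ZU1, follows the three-step pattern the trace spells out: start bdevperf idle with -z on its own RPC socket, attach a TLS NVMe/TCP controller with --psk, then kick off the queued verify job with bdevperf.py perform_tests. A condensed sketch of those steps with the workspace paths trimmed:

./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# Attach over TCP with the PSK; with the wrong key this call fails (code -5, Input/output error),
# which is exactly the outcome the NOT wrapper converts into a passing negative test.
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NusktBuLR7
# Run the configured 10-second verify workload against the TLSTESTn1 bdev just created.
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests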
00:19:04.160 [2024-07-12 15:56:01.272005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779383 ] 00:19:04.160 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.160 [2024-07-12 15:56:01.338217] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.160 [2024-07-12 15:56:01.451074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.418 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:04.418 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:04.418 15:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.iPgBSD1ZU1 00:19:04.675 [2024-07-12 15:56:01.827115] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.675 [2024-07-12 15:56:01.827242] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:04.675 [2024-07-12 15:56:01.835406] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:04.675 [2024-07-12 15:56:01.835495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd34850 (107): Transport endpoint is not connected 00:19:04.676 [2024-07-12 15:56:01.836485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd34850 (9): Bad file descriptor 00:19:04.676 [2024-07-12 15:56:01.837484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:04.676 [2024-07-12 15:56:01.837504] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:04.676 [2024-07-12 15:56:01.837516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:04.676 request: 00:19:04.676 { 00:19:04.676 "name": "TLSTEST", 00:19:04.676 "trtype": "tcp", 00:19:04.676 "traddr": "10.0.0.2", 00:19:04.676 "adrfam": "ipv4", 00:19:04.676 "trsvcid": "4420", 00:19:04.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.676 "prchk_reftag": false, 00:19:04.676 "prchk_guard": false, 00:19:04.676 "hdgst": false, 00:19:04.676 "ddgst": false, 00:19:04.676 "psk": "/tmp/tmp.iPgBSD1ZU1", 00:19:04.676 "method": "bdev_nvme_attach_controller", 00:19:04.676 "req_id": 1 00:19:04.676 } 00:19:04.676 Got JSON-RPC error response 00:19:04.676 response: 00:19:04.676 { 00:19:04.676 "code": -5, 00:19:04.676 "message": "Input/output error" 00:19:04.676 } 00:19:04.676 15:56:01 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 779383 00:19:04.676 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 779383 ']' 00:19:04.676 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 779383 00:19:04.676 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:04.676 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:04.676 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779383 00:19:04.676 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:04.676 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:04.676 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779383' 00:19:04.676 killing process with pid 779383 00:19:04.676 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 779383 00:19:04.676 Received shutdown signal, test time was about 10.000000 seconds 00:19:04.676 00:19:04.676 Latency(us) 00:19:04.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.676 =================================================================================================================== 00:19:04.676 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:04.676 [2024-07-12 15:56:01.880208] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:04.676 15:56:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 779383 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NusktBuLR7 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NusktBuLR7 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NusktBuLR7 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NusktBuLR7' 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=779525 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 779525 /var/tmp/bdevperf.sock 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 779525 ']' 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:04.934 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.934 [2024-07-12 15:56:02.183678] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:19:04.934 [2024-07-12 15:56:02.183809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779525 ] 00:19:04.934 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.192 [2024-07-12 15:56:02.242469] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.192 [2024-07-12 15:56:02.344866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.192 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:05.192 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:05.192 15:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.NusktBuLR7 00:19:05.449 [2024-07-12 15:56:02.727172] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.449 [2024-07-12 15:56:02.727295] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:05.449 [2024-07-12 15:56:02.737566] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:05.449 [2024-07-12 15:56:02.737597] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:05.449 [2024-07-12 15:56:02.737636] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:05.449 [2024-07-12 15:56:02.738507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6d850 (107): Transport endpoint is not connected 00:19:05.449 [2024-07-12 15:56:02.739503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e6d850 (9): Bad file descriptor 00:19:05.449 [2024-07-12 15:56:02.740501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:05.449 [2024-07-12 15:56:02.740520] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:05.449 [2024-07-12 15:56:02.740532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
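The failure above is a PSK identity mismatch rather than a transport problem: the target looks keys up by the identity string shown in the error ("NVMe0R01 <hostnqn> <subnqn>"), so a key registered for nqn.2016-06.io.spdk:host1 is never found when nqn.2016-06.io.spdk:host2 connects to nqn.2016-06.io.spdk:cnode1. A minimal sketch of what the matching configuration would look like, reusing the RPCs, NQNs and key path from this trace (target on the default RPC socket, bdevperf on /var/tmp/bdevperf.sock, both assumed already running as in this test):

# Register the key for the host NQN that will actually connect ...
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NusktBuLR7
# ... and attach with the same host NQN (-q) and the same key file.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NusktBuLR7
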
00:19:05.707 request: 00:19:05.707 { 00:19:05.707 "name": "TLSTEST", 00:19:05.707 "trtype": "tcp", 00:19:05.707 "traddr": "10.0.0.2", 00:19:05.707 "adrfam": "ipv4", 00:19:05.707 "trsvcid": "4420", 00:19:05.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.707 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:05.707 "prchk_reftag": false, 00:19:05.707 "prchk_guard": false, 00:19:05.707 "hdgst": false, 00:19:05.707 "ddgst": false, 00:19:05.707 "psk": "/tmp/tmp.NusktBuLR7", 00:19:05.707 "method": "bdev_nvme_attach_controller", 00:19:05.707 "req_id": 1 00:19:05.707 } 00:19:05.707 Got JSON-RPC error response 00:19:05.707 response: 00:19:05.707 { 00:19:05.707 "code": -5, 00:19:05.707 "message": "Input/output error" 00:19:05.707 } 00:19:05.707 15:56:02 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 779525 00:19:05.707 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 779525 ']' 00:19:05.707 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 779525 00:19:05.707 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:05.707 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:05.707 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779525 00:19:05.707 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:05.707 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:05.707 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779525' 00:19:05.707 killing process with pid 779525 00:19:05.707 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 779525 00:19:05.707 Received shutdown signal, test time was about 10.000000 seconds 00:19:05.707 00:19:05.707 Latency(us) 00:19:05.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.707 =================================================================================================================== 00:19:05.707 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:05.707 [2024-07-12 15:56:02.791362] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:05.707 15:56:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 779525 00:19:05.964 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:05.964 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NusktBuLR7 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NusktBuLR7 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NusktBuLR7 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.NusktBuLR7' 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=779657 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 779657 /var/tmp/bdevperf.sock 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 779657 ']' 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:05.965 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.965 [2024-07-12 15:56:03.082646] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:19:05.965 [2024-07-12 15:56:03.082750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779657 ] 00:19:05.965 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.965 [2024-07-12 15:56:03.144329] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.965 [2024-07-12 15:56:03.252869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.222 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:06.222 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:06.222 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.NusktBuLR7 00:19:06.480 [2024-07-12 15:56:03.585866] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:06.480 [2024-07-12 15:56:03.585977] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:06.480 [2024-07-12 15:56:03.591559] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:06.480 [2024-07-12 15:56:03.591592] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:06.480 [2024-07-12 15:56:03.591653] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:06.480 [2024-07-12 15:56:03.592194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda5850 (107): Transport endpoint is not connected 00:19:06.480 [2024-07-12 15:56:03.593184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda5850 (9): Bad file descriptor 00:19:06.480 [2024-07-12 15:56:03.594183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:06.480 [2024-07-12 15:56:03.594206] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:06.480 [2024-07-12 15:56:03.594220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
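The valid_exec_arg / type -t / es bookkeeping repeated around each of these cases is the autotest NOT wrapper: the attach is expected to fail, and the case only passes when run_bdevperf returns non-zero. A simplified stand-in for that pattern (the helper name not_expected is made up here for illustration; the real logic lives in autotest_common.sh around the es= and (( es > 128 )) lines traced above):

# Succeeds only when the wrapped command fails with an ordinary error
# (exit status 1..128), mirroring the NOT/valid_exec_arg pattern in the trace.
not_expected() {
    local es=0
    "$@" || es=$?
    (( es != 0 && es <= 128 ))
}

# Usage matching the tls.sh@152 case above:
#   not_expected run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NusktBuLR7
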
00:19:06.480 request: 00:19:06.480 { 00:19:06.480 "name": "TLSTEST", 00:19:06.480 "trtype": "tcp", 00:19:06.480 "traddr": "10.0.0.2", 00:19:06.480 "adrfam": "ipv4", 00:19:06.480 "trsvcid": "4420", 00:19:06.480 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:06.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:06.480 "prchk_reftag": false, 00:19:06.480 "prchk_guard": false, 00:19:06.480 "hdgst": false, 00:19:06.480 "ddgst": false, 00:19:06.481 "psk": "/tmp/tmp.NusktBuLR7", 00:19:06.481 "method": "bdev_nvme_attach_controller", 00:19:06.481 "req_id": 1 00:19:06.481 } 00:19:06.481 Got JSON-RPC error response 00:19:06.481 response: 00:19:06.481 { 00:19:06.481 "code": -5, 00:19:06.481 "message": "Input/output error" 00:19:06.481 } 00:19:06.481 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 779657 00:19:06.481 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 779657 ']' 00:19:06.481 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 779657 00:19:06.481 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:06.481 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:06.481 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779657 00:19:06.481 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:06.481 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:06.481 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779657' 00:19:06.481 killing process with pid 779657 00:19:06.481 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 779657 00:19:06.481 Received shutdown signal, test time was about 10.000000 seconds 00:19:06.481 00:19:06.481 Latency(us) 00:19:06.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.481 =================================================================================================================== 00:19:06.481 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:06.481 [2024-07-12 15:56:03.638442] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:06.481 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 779657 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=779790 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 779790 /var/tmp/bdevperf.sock 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 779790 ']' 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:06.739 15:56:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.739 [2024-07-12 15:56:03.941000] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:19:06.739 [2024-07-12 15:56:03.941099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779790 ] 00:19:06.739 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.739 [2024-07-12 15:56:03.997989] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.996 [2024-07-12 15:56:04.101198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.996 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:06.996 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:06.996 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:07.254 [2024-07-12 15:56:04.432016] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:07.254 [2024-07-12 15:56:04.433749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cdfb0 (9): Bad file descriptor 00:19:07.254 [2024-07-12 15:56:04.434744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:07.254 [2024-07-12 15:56:04.434783] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:07.254 [2024-07-12 15:56:04.434796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
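This last negative case drops --psk entirely; since the test's listeners are created with nvmf_subsystem_add_listener ... -s 4420 -k (visible further down at target/tls.sh@53), a plain TCP attach is expected to be rejected, which is what shows up here as the errno 107 / bad file descriptor churn before the JSON-RPC I/O error. A standalone sketch of the same check, using the exact attach RPC from the trace:

# Expect the non-TLS attach to fail against the TLS-only listener.
if scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1; then
    echo "unexpected: attach without --psk succeeded on a TLS listener" >&2
    exit 1
fi
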
00:19:07.254 request: 00:19:07.254 { 00:19:07.254 "name": "TLSTEST", 00:19:07.254 "trtype": "tcp", 00:19:07.254 "traddr": "10.0.0.2", 00:19:07.254 "adrfam": "ipv4", 00:19:07.254 "trsvcid": "4420", 00:19:07.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.254 "prchk_reftag": false, 00:19:07.254 "prchk_guard": false, 00:19:07.254 "hdgst": false, 00:19:07.254 "ddgst": false, 00:19:07.254 "method": "bdev_nvme_attach_controller", 00:19:07.254 "req_id": 1 00:19:07.254 } 00:19:07.254 Got JSON-RPC error response 00:19:07.254 response: 00:19:07.254 { 00:19:07.254 "code": -5, 00:19:07.254 "message": "Input/output error" 00:19:07.254 } 00:19:07.254 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 779790 00:19:07.254 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 779790 ']' 00:19:07.254 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 779790 00:19:07.254 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:07.254 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:07.254 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779790 00:19:07.254 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:07.254 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:07.254 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779790' 00:19:07.254 killing process with pid 779790 00:19:07.254 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 779790 00:19:07.254 Received shutdown signal, test time was about 10.000000 seconds 00:19:07.254 00:19:07.254 Latency(us) 00:19:07.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.254 =================================================================================================================== 00:19:07.254 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:07.254 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 779790 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 776287 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 776287 ']' 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 776287 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 776287 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 776287' 00:19:07.511 killing 
process with pid 776287 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 776287 00:19:07.511 [2024-07-12 15:56:04.745222] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:07.511 15:56:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 776287 00:19:07.794 15:56:04 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:07.794 15:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:07.794 15:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:07.794 15:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:07.794 15:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:07.794 15:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:07.794 15:56:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.yTLVvHckaJ 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.yTLVvHckaJ 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=779940 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 779940 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 779940 ']' 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:07.794 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.794 [2024-07-12 15:56:05.086707] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
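target/tls.sh@159-162 above derive the long-format key: format_interchange_psk hands the raw hex string to format_key, whose python one-liner emits NVMeTLSkey-1:<digest>:<base64 of the key string's bytes plus a CRC32>:, and the result is written to the mktemp path /tmp/tmp.yTLVvHckaJ and locked down to 0600. A rough standalone equivalent of that helper (the little-endian byte order of the appended CRC is an assumption here; nvmf/common.sh@702-705 in the trace is the authoritative version):

# Rough equivalent of format_interchange_psk/format_key as traced above; not the
# exact nvmf/common.sh implementation, just the same shape of output.
format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib

key = sys.argv[1].encode()
digest = int(sys.argv[2])
# A CRC32 of the key string is appended before base64-encoding; little-endian
# byte order is assumed here.
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
PYEOF
}

# e.g., as in the trace:
#   format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
# should print a key of the same shape as the NVMeTLSkey-1:02:MDAx...==: value above.

The chmod 0600 right after the key is written matters later in this log: both the initiator and the target refuse to load a more permissive key file.
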
00:19:07.794 [2024-07-12 15:56:05.086821] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.051 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.051 [2024-07-12 15:56:05.151521] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.051 [2024-07-12 15:56:05.261469] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.051 [2024-07-12 15:56:05.261543] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.051 [2024-07-12 15:56:05.261557] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.051 [2024-07-12 15:56:05.261568] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.051 [2024-07-12 15:56:05.261578] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.051 [2024-07-12 15:56:05.261610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.308 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.308 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:08.308 15:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.308 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.308 15:56:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.308 15:56:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.308 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.yTLVvHckaJ 00:19:08.308 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.yTLVvHckaJ 00:19:08.308 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:08.565 [2024-07-12 15:56:05.671869] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.565 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:08.822 15:56:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:09.079 [2024-07-12 15:56:06.229361] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:09.079 [2024-07-12 15:56:06.229587] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.079 15:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:09.336 malloc0 00:19:09.336 15:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:09.593 15:56:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.yTLVvHckaJ 00:19:09.849 [2024-07-12 15:56:07.086550] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:09.849 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yTLVvHckaJ 00:19:09.849 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:09.849 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:09.849 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:09.849 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.yTLVvHckaJ' 00:19:09.849 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:09.849 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=780225 00:19:09.849 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:09.849 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:09.849 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 780225 /var/tmp/bdevperf.sock 00:19:09.849 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 780225 ']' 00:19:09.849 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.850 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.850 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:09.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.850 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.850 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.106 [2024-07-12 15:56:07.150969] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
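Condensed, the target-side TLS wiring that nvmfappstart/setup_nvmf_tgt just performed looks like the following (target/tls.sh@49-58; all RPCs copied from the trace, with the nvmf_tgt itself running under ip netns exec cvl_0_0_ns_spdk and answering on the default /var/tmp/spdk.sock):

KEY=/tmp/tmp.yTLVvHckaJ
chmod 0600 "$KEY"                                   # key file must stay owner-only
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                   # -k: the listener that logs "TLS support is considered experimental"
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk "$KEY"          # ties the PSK to host1 on cnode1

The bdevperf side then only needs the matching --psk file and host NQN, as in the attach that follows.
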
00:19:10.106 [2024-07-12 15:56:07.151065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780225 ] 00:19:10.106 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.106 [2024-07-12 15:56:07.210822] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.106 [2024-07-12 15:56:07.315480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.363 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.363 15:56:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:10.363 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yTLVvHckaJ 00:19:10.620 [2024-07-12 15:56:07.663425] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.620 [2024-07-12 15:56:07.663542] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:10.620 TLSTESTn1 00:19:10.620 15:56:07 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:10.620 Running I/O for 10 seconds... 00:19:20.640 00:19:20.640 Latency(us) 00:19:20.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.640 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:20.640 Verification LBA range: start 0x0 length 0x2000 00:19:20.640 TLSTESTn1 : 10.02 3526.06 13.77 0.00 0.00 36242.08 9077.95 36505.98 00:19:20.640 =================================================================================================================== 00:19:20.641 Total : 3526.06 13.77 0.00 0.00 36242.08 9077.95 36505.98 00:19:20.641 0 00:19:20.641 15:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:20.641 15:56:17 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 780225 00:19:20.641 15:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 780225 ']' 00:19:20.641 15:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 780225 00:19:20.641 15:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:20.641 15:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:20.641 15:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 780225 00:19:20.898 15:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:20.898 15:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:20.898 15:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 780225' 00:19:20.898 killing process with pid 780225 00:19:20.898 15:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 780225 00:19:20.898 Received shutdown signal, test time was about 10.000000 seconds 00:19:20.898 00:19:20.898 Latency(us) 00:19:20.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:19:20.898 =================================================================================================================== 00:19:20.898 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.898 [2024-07-12 15:56:17.942127] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:20.898 15:56:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 780225 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.yTLVvHckaJ 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yTLVvHckaJ 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yTLVvHckaJ 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yTLVvHckaJ 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.yTLVvHckaJ' 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=781433 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 781433 /var/tmp/bdevperf.sock 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 781433 ']' 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:21.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:21.156 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:21.156 [2024-07-12 15:56:18.251620] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
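For reference, the successful TLSTESTn1 pass above boils down to this sequence (paths and arguments copied from target/tls.sh@27-41 in the trace; the waitforlisten polling and trap-based cleanup are omitted):

# bdevperf idles in -z mode on its own RPC socket until it is told to run.
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!

# Attach the TLS-protected controller; --psk and -q must match what the target
# registered via nvmf_subsystem_add_host.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yTLVvHckaJ

# Kick off the verify workload and wait for the results table.
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

kill "$bdevperf_pid"
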
00:19:21.156 [2024-07-12 15:56:18.251722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781433 ] 00:19:21.156 EAL: No free 2048 kB hugepages reported on node 1 00:19:21.156 [2024-07-12 15:56:18.315783] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.156 [2024-07-12 15:56:18.424968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.414 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.414 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:21.414 15:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yTLVvHckaJ 00:19:21.672 [2024-07-12 15:56:18.810623] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:21.672 [2024-07-12 15:56:18.810705] bdev_nvme.c:6130:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:21.672 [2024-07-12 15:56:18.810719] bdev_nvme.c:6235:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.yTLVvHckaJ 00:19:21.672 request: 00:19:21.672 { 00:19:21.672 "name": "TLSTEST", 00:19:21.672 "trtype": "tcp", 00:19:21.672 "traddr": "10.0.0.2", 00:19:21.672 "adrfam": "ipv4", 00:19:21.672 "trsvcid": "4420", 00:19:21.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:21.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:21.672 "prchk_reftag": false, 00:19:21.672 "prchk_guard": false, 00:19:21.672 "hdgst": false, 00:19:21.672 "ddgst": false, 00:19:21.672 "psk": "/tmp/tmp.yTLVvHckaJ", 00:19:21.672 "method": "bdev_nvme_attach_controller", 00:19:21.672 "req_id": 1 00:19:21.672 } 00:19:21.672 Got JSON-RPC error response 00:19:21.672 response: 00:19:21.672 { 00:19:21.672 "code": -1, 00:19:21.672 "message": "Operation not permitted" 00:19:21.672 } 00:19:21.672 15:56:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 781433 00:19:21.672 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 781433 ']' 00:19:21.672 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 781433 00:19:21.672 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:21.672 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:21.672 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781433 00:19:21.672 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:21.672 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:21.672 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781433' 00:19:21.672 killing process with pid 781433 00:19:21.672 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 781433 00:19:21.672 Received shutdown signal, test time was about 10.000000 seconds 00:19:21.672 00:19:21.672 Latency(us) 00:19:21.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.672 =================================================================================================================== 
00:19:21.672 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:21.672 15:56:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 781433 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 779940 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 779940 ']' 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 779940 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779940 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779940' 00:19:21.929 killing process with pid 779940 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 779940 00:19:21.929 [2024-07-12 15:56:19.139945] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:21.929 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 779940 00:19:22.187 15:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:22.187 15:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:22.187 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:22.187 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.187 15:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=781645 00:19:22.187 15:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:22.187 15:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 781645 00:19:22.187 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 781645 ']' 00:19:22.187 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.187 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.187 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.187 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.187 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.187 [2024-07-12 15:56:19.475907] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:19:22.187 [2024-07-12 15:56:19.476006] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.446 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.446 [2024-07-12 15:56:19.540184] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.446 [2024-07-12 15:56:19.639553] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.446 [2024-07-12 15:56:19.639626] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.446 [2024-07-12 15:56:19.639649] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.446 [2024-07-12 15:56:19.639659] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.446 [2024-07-12 15:56:19.639669] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.446 [2024-07-12 15:56:19.639701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.704 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.704 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:22.704 15:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:22.704 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:22.704 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.704 15:56:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.704 15:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.yTLVvHckaJ 00:19:22.704 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:22.704 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.yTLVvHckaJ 00:19:22.704 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:22.704 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:22.704 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:22.705 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:22.705 15:56:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.yTLVvHckaJ 00:19:22.705 15:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.yTLVvHckaJ 00:19:22.705 15:56:19 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:22.962 [2024-07-12 15:56:20.026940] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.962 15:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:23.220 15:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:23.480 [2024-07-12 15:56:20.536292] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:19:23.480 [2024-07-12 15:56:20.536546] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.480 15:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:23.738 malloc0 00:19:23.738 15:56:20 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:23.996 15:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yTLVvHckaJ 00:19:24.255 [2024-07-12 15:56:21.365909] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:24.255 [2024-07-12 15:56:21.365948] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:24.255 [2024-07-12 15:56:21.365979] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:24.255 request: 00:19:24.255 { 00:19:24.255 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:24.255 "host": "nqn.2016-06.io.spdk:host1", 00:19:24.255 "psk": "/tmp/tmp.yTLVvHckaJ", 00:19:24.255 "method": "nvmf_subsystem_add_host", 00:19:24.255 "req_id": 1 00:19:24.255 } 00:19:24.255 Got JSON-RPC error response 00:19:24.255 response: 00:19:24.255 { 00:19:24.255 "code": -32603, 00:19:24.255 "message": "Internal error" 00:19:24.255 } 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 781645 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 781645 ']' 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 781645 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781645 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781645' 00:19:24.255 killing process with pid 781645 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 781645 00:19:24.255 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 781645 00:19:24.513 15:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.yTLVvHckaJ 00:19:24.513 15:56:21 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:24.513 15:56:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:24.513 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:24.513 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.513 15:56:21 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=781912 00:19:24.513 15:56:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:24.513 15:56:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 781912 00:19:24.513 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 781912 ']' 00:19:24.513 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.513 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.513 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.513 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.513 15:56:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.513 [2024-07-12 15:56:21.745243] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:19:24.513 [2024-07-12 15:56:21.745334] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.513 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.770 [2024-07-12 15:56:21.809138] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.770 [2024-07-12 15:56:21.911689] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.770 [2024-07-12 15:56:21.911748] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.770 [2024-07-12 15:56:21.911773] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.770 [2024-07-12 15:56:21.911784] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.770 [2024-07-12 15:56:21.911794] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
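Two of the failures above are purely about file modes: with the key at 0666, the initiator rejects it (bdev_nvme: "Incorrect permissions for PSK file" leading to "Operation not permitted") and the target refuses to register it (tcp_load_psk failing inside nvmf_subsystem_add_host with -32603), which is why tls.sh flips the mode between 0666 and 0600 around these cases. A small guard in the same spirit (the exact mode bits SPDK tolerates are not spelled out in the trace; 0600 is simply what the passing runs use):

KEY=/tmp/tmp.yTLVvHckaJ
# Tighten the key file before handing it to either side of the connection.
if [[ "$(stat -c %a "$KEY")" != "600" ]]; then
    chmod 0600 "$KEY"
fi
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk "$KEY"
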
00:19:24.770 [2024-07-12 15:56:21.911821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.770 15:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.770 15:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:24.770 15:56:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:24.770 15:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:24.770 15:56:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.770 15:56:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.770 15:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.yTLVvHckaJ 00:19:24.770 15:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.yTLVvHckaJ 00:19:24.770 15:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:25.335 [2024-07-12 15:56:22.329277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.335 15:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:25.335 15:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:25.593 [2024-07-12 15:56:22.834573] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:25.593 [2024-07-12 15:56:22.834829] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.593 15:56:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:25.850 malloc0 00:19:25.850 15:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:26.107 15:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yTLVvHckaJ 00:19:26.365 [2024-07-12 15:56:23.578971] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:26.365 15:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=782156 00:19:26.365 15:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:26.365 15:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:26.365 15:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 782156 /var/tmp/bdevperf.sock 00:19:26.365 15:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 782156 ']' 00:19:26.365 15:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.365 15:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:26.365 15:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.365 15:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:26.365 15:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.365 [2024-07-12 15:56:23.636475] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:19:26.365 [2024-07-12 15:56:23.636560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782156 ] 00:19:26.622 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.622 [2024-07-12 15:56:23.697733] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.622 [2024-07-12 15:56:23.806875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.882 15:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:26.882 15:56:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:26.882 15:56:23 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yTLVvHckaJ 00:19:26.882 [2024-07-12 15:56:24.152136] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.882 [2024-07-12 15:56:24.152254] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:27.141 TLSTESTn1 00:19:27.141 15:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:27.400 15:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:27.400 "subsystems": [ 00:19:27.400 { 00:19:27.400 "subsystem": "keyring", 00:19:27.400 "config": [] 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "subsystem": "iobuf", 00:19:27.400 "config": [ 00:19:27.400 { 00:19:27.400 "method": "iobuf_set_options", 00:19:27.400 "params": { 00:19:27.400 "small_pool_count": 8192, 00:19:27.400 "large_pool_count": 1024, 00:19:27.400 "small_bufsize": 8192, 00:19:27.400 "large_bufsize": 135168 00:19:27.400 } 00:19:27.400 } 00:19:27.400 ] 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "subsystem": "sock", 00:19:27.400 "config": [ 00:19:27.400 { 00:19:27.400 "method": "sock_set_default_impl", 00:19:27.400 "params": { 00:19:27.400 "impl_name": "posix" 00:19:27.400 } 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "method": "sock_impl_set_options", 00:19:27.400 "params": { 00:19:27.400 "impl_name": "ssl", 00:19:27.400 "recv_buf_size": 4096, 00:19:27.400 "send_buf_size": 4096, 00:19:27.400 "enable_recv_pipe": true, 00:19:27.400 "enable_quickack": false, 00:19:27.400 "enable_placement_id": 0, 00:19:27.400 "enable_zerocopy_send_server": true, 00:19:27.400 "enable_zerocopy_send_client": false, 00:19:27.400 "zerocopy_threshold": 0, 00:19:27.400 "tls_version": 0, 00:19:27.400 "enable_ktls": false 00:19:27.400 } 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "method": "sock_impl_set_options", 00:19:27.400 "params": { 00:19:27.400 "impl_name": "posix", 00:19:27.400 "recv_buf_size": 2097152, 00:19:27.400 
"send_buf_size": 2097152, 00:19:27.400 "enable_recv_pipe": true, 00:19:27.400 "enable_quickack": false, 00:19:27.400 "enable_placement_id": 0, 00:19:27.400 "enable_zerocopy_send_server": true, 00:19:27.400 "enable_zerocopy_send_client": false, 00:19:27.400 "zerocopy_threshold": 0, 00:19:27.400 "tls_version": 0, 00:19:27.400 "enable_ktls": false 00:19:27.400 } 00:19:27.400 } 00:19:27.400 ] 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "subsystem": "vmd", 00:19:27.400 "config": [] 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "subsystem": "accel", 00:19:27.400 "config": [ 00:19:27.400 { 00:19:27.400 "method": "accel_set_options", 00:19:27.400 "params": { 00:19:27.400 "small_cache_size": 128, 00:19:27.400 "large_cache_size": 16, 00:19:27.400 "task_count": 2048, 00:19:27.400 "sequence_count": 2048, 00:19:27.400 "buf_count": 2048 00:19:27.400 } 00:19:27.400 } 00:19:27.400 ] 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "subsystem": "bdev", 00:19:27.400 "config": [ 00:19:27.400 { 00:19:27.400 "method": "bdev_set_options", 00:19:27.400 "params": { 00:19:27.400 "bdev_io_pool_size": 65535, 00:19:27.400 "bdev_io_cache_size": 256, 00:19:27.400 "bdev_auto_examine": true, 00:19:27.400 "iobuf_small_cache_size": 128, 00:19:27.400 "iobuf_large_cache_size": 16 00:19:27.400 } 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "method": "bdev_raid_set_options", 00:19:27.400 "params": { 00:19:27.400 "process_window_size_kb": 1024 00:19:27.400 } 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "method": "bdev_iscsi_set_options", 00:19:27.400 "params": { 00:19:27.400 "timeout_sec": 30 00:19:27.400 } 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "method": "bdev_nvme_set_options", 00:19:27.400 "params": { 00:19:27.400 "action_on_timeout": "none", 00:19:27.400 "timeout_us": 0, 00:19:27.400 "timeout_admin_us": 0, 00:19:27.400 "keep_alive_timeout_ms": 10000, 00:19:27.400 "arbitration_burst": 0, 00:19:27.400 "low_priority_weight": 0, 00:19:27.400 "medium_priority_weight": 0, 00:19:27.400 "high_priority_weight": 0, 00:19:27.400 "nvme_adminq_poll_period_us": 10000, 00:19:27.400 "nvme_ioq_poll_period_us": 0, 00:19:27.400 "io_queue_requests": 0, 00:19:27.400 "delay_cmd_submit": true, 00:19:27.400 "transport_retry_count": 4, 00:19:27.400 "bdev_retry_count": 3, 00:19:27.400 "transport_ack_timeout": 0, 00:19:27.400 "ctrlr_loss_timeout_sec": 0, 00:19:27.400 "reconnect_delay_sec": 0, 00:19:27.400 "fast_io_fail_timeout_sec": 0, 00:19:27.400 "disable_auto_failback": false, 00:19:27.400 "generate_uuids": false, 00:19:27.400 "transport_tos": 0, 00:19:27.400 "nvme_error_stat": false, 00:19:27.400 "rdma_srq_size": 0, 00:19:27.400 "io_path_stat": false, 00:19:27.400 "allow_accel_sequence": false, 00:19:27.400 "rdma_max_cq_size": 0, 00:19:27.400 "rdma_cm_event_timeout_ms": 0, 00:19:27.400 "dhchap_digests": [ 00:19:27.400 "sha256", 00:19:27.400 "sha384", 00:19:27.400 "sha512" 00:19:27.400 ], 00:19:27.400 "dhchap_dhgroups": [ 00:19:27.400 "null", 00:19:27.400 "ffdhe2048", 00:19:27.400 "ffdhe3072", 00:19:27.400 "ffdhe4096", 00:19:27.400 "ffdhe6144", 00:19:27.400 "ffdhe8192" 00:19:27.400 ] 00:19:27.400 } 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "method": "bdev_nvme_set_hotplug", 00:19:27.400 "params": { 00:19:27.400 "period_us": 100000, 00:19:27.400 "enable": false 00:19:27.400 } 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "method": "bdev_malloc_create", 00:19:27.400 "params": { 00:19:27.400 "name": "malloc0", 00:19:27.400 "num_blocks": 8192, 00:19:27.400 "block_size": 4096, 00:19:27.400 "physical_block_size": 4096, 00:19:27.400 "uuid": 
"9ed2432f-3b69-4256-9870-792be8f1aab0", 00:19:27.400 "optimal_io_boundary": 0 00:19:27.400 } 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "method": "bdev_wait_for_examine" 00:19:27.400 } 00:19:27.400 ] 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "subsystem": "nbd", 00:19:27.400 "config": [] 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "subsystem": "scheduler", 00:19:27.400 "config": [ 00:19:27.400 { 00:19:27.400 "method": "framework_set_scheduler", 00:19:27.400 "params": { 00:19:27.400 "name": "static" 00:19:27.400 } 00:19:27.400 } 00:19:27.400 ] 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "subsystem": "nvmf", 00:19:27.400 "config": [ 00:19:27.400 { 00:19:27.400 "method": "nvmf_set_config", 00:19:27.400 "params": { 00:19:27.400 "discovery_filter": "match_any", 00:19:27.400 "admin_cmd_passthru": { 00:19:27.400 "identify_ctrlr": false 00:19:27.400 } 00:19:27.400 } 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "method": "nvmf_set_max_subsystems", 00:19:27.400 "params": { 00:19:27.400 "max_subsystems": 1024 00:19:27.400 } 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "method": "nvmf_set_crdt", 00:19:27.400 "params": { 00:19:27.400 "crdt1": 0, 00:19:27.400 "crdt2": 0, 00:19:27.400 "crdt3": 0 00:19:27.400 } 00:19:27.400 }, 00:19:27.400 { 00:19:27.400 "method": "nvmf_create_transport", 00:19:27.400 "params": { 00:19:27.400 "trtype": "TCP", 00:19:27.400 "max_queue_depth": 128, 00:19:27.400 "max_io_qpairs_per_ctrlr": 127, 00:19:27.400 "in_capsule_data_size": 4096, 00:19:27.400 "max_io_size": 131072, 00:19:27.400 "io_unit_size": 131072, 00:19:27.400 "max_aq_depth": 128, 00:19:27.400 "num_shared_buffers": 511, 00:19:27.400 "buf_cache_size": 4294967295, 00:19:27.401 "dif_insert_or_strip": false, 00:19:27.401 "zcopy": false, 00:19:27.401 "c2h_success": false, 00:19:27.401 "sock_priority": 0, 00:19:27.401 "abort_timeout_sec": 1, 00:19:27.401 "ack_timeout": 0, 00:19:27.401 "data_wr_pool_size": 0 00:19:27.401 } 00:19:27.401 }, 00:19:27.401 { 00:19:27.401 "method": "nvmf_create_subsystem", 00:19:27.401 "params": { 00:19:27.401 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.401 "allow_any_host": false, 00:19:27.401 "serial_number": "SPDK00000000000001", 00:19:27.401 "model_number": "SPDK bdev Controller", 00:19:27.401 "max_namespaces": 10, 00:19:27.401 "min_cntlid": 1, 00:19:27.401 "max_cntlid": 65519, 00:19:27.401 "ana_reporting": false 00:19:27.401 } 00:19:27.401 }, 00:19:27.401 { 00:19:27.401 "method": "nvmf_subsystem_add_host", 00:19:27.401 "params": { 00:19:27.401 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.401 "host": "nqn.2016-06.io.spdk:host1", 00:19:27.401 "psk": "/tmp/tmp.yTLVvHckaJ" 00:19:27.401 } 00:19:27.401 }, 00:19:27.401 { 00:19:27.401 "method": "nvmf_subsystem_add_ns", 00:19:27.401 "params": { 00:19:27.401 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.401 "namespace": { 00:19:27.401 "nsid": 1, 00:19:27.401 "bdev_name": "malloc0", 00:19:27.401 "nguid": "9ED2432F3B6942569870792BE8F1AAB0", 00:19:27.401 "uuid": "9ed2432f-3b69-4256-9870-792be8f1aab0", 00:19:27.401 "no_auto_visible": false 00:19:27.401 } 00:19:27.401 } 00:19:27.401 }, 00:19:27.401 { 00:19:27.401 "method": "nvmf_subsystem_add_listener", 00:19:27.401 "params": { 00:19:27.401 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.401 "listen_address": { 00:19:27.401 "trtype": "TCP", 00:19:27.401 "adrfam": "IPv4", 00:19:27.401 "traddr": "10.0.0.2", 00:19:27.401 "trsvcid": "4420" 00:19:27.401 }, 00:19:27.401 "secure_channel": true 00:19:27.401 } 00:19:27.401 } 00:19:27.401 ] 00:19:27.401 } 00:19:27.401 ] 00:19:27.401 }' 00:19:27.401 15:56:24 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:27.660 15:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:27.660 "subsystems": [ 00:19:27.660 { 00:19:27.660 "subsystem": "keyring", 00:19:27.660 "config": [] 00:19:27.660 }, 00:19:27.660 { 00:19:27.660 "subsystem": "iobuf", 00:19:27.660 "config": [ 00:19:27.660 { 00:19:27.660 "method": "iobuf_set_options", 00:19:27.660 "params": { 00:19:27.660 "small_pool_count": 8192, 00:19:27.660 "large_pool_count": 1024, 00:19:27.660 "small_bufsize": 8192, 00:19:27.660 "large_bufsize": 135168 00:19:27.660 } 00:19:27.660 } 00:19:27.660 ] 00:19:27.660 }, 00:19:27.660 { 00:19:27.660 "subsystem": "sock", 00:19:27.660 "config": [ 00:19:27.660 { 00:19:27.660 "method": "sock_set_default_impl", 00:19:27.660 "params": { 00:19:27.660 "impl_name": "posix" 00:19:27.660 } 00:19:27.660 }, 00:19:27.660 { 00:19:27.660 "method": "sock_impl_set_options", 00:19:27.660 "params": { 00:19:27.660 "impl_name": "ssl", 00:19:27.660 "recv_buf_size": 4096, 00:19:27.660 "send_buf_size": 4096, 00:19:27.660 "enable_recv_pipe": true, 00:19:27.660 "enable_quickack": false, 00:19:27.660 "enable_placement_id": 0, 00:19:27.660 "enable_zerocopy_send_server": true, 00:19:27.660 "enable_zerocopy_send_client": false, 00:19:27.660 "zerocopy_threshold": 0, 00:19:27.660 "tls_version": 0, 00:19:27.660 "enable_ktls": false 00:19:27.660 } 00:19:27.660 }, 00:19:27.660 { 00:19:27.660 "method": "sock_impl_set_options", 00:19:27.660 "params": { 00:19:27.660 "impl_name": "posix", 00:19:27.660 "recv_buf_size": 2097152, 00:19:27.660 "send_buf_size": 2097152, 00:19:27.660 "enable_recv_pipe": true, 00:19:27.660 "enable_quickack": false, 00:19:27.660 "enable_placement_id": 0, 00:19:27.660 "enable_zerocopy_send_server": true, 00:19:27.660 "enable_zerocopy_send_client": false, 00:19:27.660 "zerocopy_threshold": 0, 00:19:27.660 "tls_version": 0, 00:19:27.660 "enable_ktls": false 00:19:27.660 } 00:19:27.660 } 00:19:27.660 ] 00:19:27.660 }, 00:19:27.660 { 00:19:27.660 "subsystem": "vmd", 00:19:27.660 "config": [] 00:19:27.660 }, 00:19:27.660 { 00:19:27.660 "subsystem": "accel", 00:19:27.660 "config": [ 00:19:27.660 { 00:19:27.660 "method": "accel_set_options", 00:19:27.660 "params": { 00:19:27.660 "small_cache_size": 128, 00:19:27.660 "large_cache_size": 16, 00:19:27.660 "task_count": 2048, 00:19:27.660 "sequence_count": 2048, 00:19:27.660 "buf_count": 2048 00:19:27.660 } 00:19:27.660 } 00:19:27.660 ] 00:19:27.660 }, 00:19:27.660 { 00:19:27.660 "subsystem": "bdev", 00:19:27.660 "config": [ 00:19:27.660 { 00:19:27.660 "method": "bdev_set_options", 00:19:27.660 "params": { 00:19:27.660 "bdev_io_pool_size": 65535, 00:19:27.660 "bdev_io_cache_size": 256, 00:19:27.660 "bdev_auto_examine": true, 00:19:27.660 "iobuf_small_cache_size": 128, 00:19:27.660 "iobuf_large_cache_size": 16 00:19:27.660 } 00:19:27.660 }, 00:19:27.660 { 00:19:27.660 "method": "bdev_raid_set_options", 00:19:27.660 "params": { 00:19:27.660 "process_window_size_kb": 1024 00:19:27.660 } 00:19:27.660 }, 00:19:27.660 { 00:19:27.660 "method": "bdev_iscsi_set_options", 00:19:27.660 "params": { 00:19:27.660 "timeout_sec": 30 00:19:27.660 } 00:19:27.660 }, 00:19:27.660 { 00:19:27.660 "method": "bdev_nvme_set_options", 00:19:27.660 "params": { 00:19:27.660 "action_on_timeout": "none", 00:19:27.660 "timeout_us": 0, 00:19:27.660 "timeout_admin_us": 0, 00:19:27.660 "keep_alive_timeout_ms": 10000, 00:19:27.660 "arbitration_burst": 0, 
00:19:27.660 "low_priority_weight": 0, 00:19:27.660 "medium_priority_weight": 0, 00:19:27.660 "high_priority_weight": 0, 00:19:27.660 "nvme_adminq_poll_period_us": 10000, 00:19:27.660 "nvme_ioq_poll_period_us": 0, 00:19:27.660 "io_queue_requests": 512, 00:19:27.660 "delay_cmd_submit": true, 00:19:27.660 "transport_retry_count": 4, 00:19:27.660 "bdev_retry_count": 3, 00:19:27.660 "transport_ack_timeout": 0, 00:19:27.660 "ctrlr_loss_timeout_sec": 0, 00:19:27.660 "reconnect_delay_sec": 0, 00:19:27.660 "fast_io_fail_timeout_sec": 0, 00:19:27.660 "disable_auto_failback": false, 00:19:27.660 "generate_uuids": false, 00:19:27.660 "transport_tos": 0, 00:19:27.660 "nvme_error_stat": false, 00:19:27.660 "rdma_srq_size": 0, 00:19:27.660 "io_path_stat": false, 00:19:27.660 "allow_accel_sequence": false, 00:19:27.660 "rdma_max_cq_size": 0, 00:19:27.660 "rdma_cm_event_timeout_ms": 0, 00:19:27.660 "dhchap_digests": [ 00:19:27.660 "sha256", 00:19:27.660 "sha384", 00:19:27.660 "sha512" 00:19:27.660 ], 00:19:27.660 "dhchap_dhgroups": [ 00:19:27.660 "null", 00:19:27.660 "ffdhe2048", 00:19:27.660 "ffdhe3072", 00:19:27.660 "ffdhe4096", 00:19:27.660 "ffdhe6144", 00:19:27.660 "ffdhe8192" 00:19:27.660 ] 00:19:27.660 } 00:19:27.660 }, 00:19:27.660 { 00:19:27.660 "method": "bdev_nvme_attach_controller", 00:19:27.660 "params": { 00:19:27.660 "name": "TLSTEST", 00:19:27.660 "trtype": "TCP", 00:19:27.660 "adrfam": "IPv4", 00:19:27.660 "traddr": "10.0.0.2", 00:19:27.660 "trsvcid": "4420", 00:19:27.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:27.660 "prchk_reftag": false, 00:19:27.660 "prchk_guard": false, 00:19:27.660 "ctrlr_loss_timeout_sec": 0, 00:19:27.660 "reconnect_delay_sec": 0, 00:19:27.660 "fast_io_fail_timeout_sec": 0, 00:19:27.660 "psk": "/tmp/tmp.yTLVvHckaJ", 00:19:27.660 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:27.660 "hdgst": false, 00:19:27.660 "ddgst": false 00:19:27.660 } 00:19:27.660 }, 00:19:27.660 { 00:19:27.660 "method": "bdev_nvme_set_hotplug", 00:19:27.660 "params": { 00:19:27.660 "period_us": 100000, 00:19:27.660 "enable": false 00:19:27.660 } 00:19:27.660 }, 00:19:27.660 { 00:19:27.660 "method": "bdev_wait_for_examine" 00:19:27.660 } 00:19:27.660 ] 00:19:27.660 }, 00:19:27.660 { 00:19:27.660 "subsystem": "nbd", 00:19:27.660 "config": [] 00:19:27.660 } 00:19:27.660 ] 00:19:27.660 }' 00:19:27.660 15:56:24 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 782156 00:19:27.660 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 782156 ']' 00:19:27.660 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 782156 00:19:27.660 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:27.660 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:27.660 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 782156 00:19:27.920 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:27.920 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:27.920 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 782156' 00:19:27.920 killing process with pid 782156 00:19:27.920 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 782156 00:19:27.920 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.920 00:19:27.920 Latency(us) 00:19:27.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:27.920 =================================================================================================================== 00:19:27.920 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:27.920 [2024-07-12 15:56:24.971187] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:27.920 15:56:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 782156 00:19:28.180 15:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 781912 00:19:28.180 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 781912 ']' 00:19:28.180 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 781912 00:19:28.180 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:28.180 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:28.180 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781912 00:19:28.180 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:28.180 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:28.180 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781912' 00:19:28.180 killing process with pid 781912 00:19:28.180 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 781912 00:19:28.180 [2024-07-12 15:56:25.258242] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:28.180 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 781912 00:19:28.439 15:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:28.439 15:56:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.439 15:56:25 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:28.439 "subsystems": [ 00:19:28.439 { 00:19:28.439 "subsystem": "keyring", 00:19:28.439 "config": [] 00:19:28.439 }, 00:19:28.439 { 00:19:28.439 "subsystem": "iobuf", 00:19:28.439 "config": [ 00:19:28.439 { 00:19:28.439 "method": "iobuf_set_options", 00:19:28.439 "params": { 00:19:28.439 "small_pool_count": 8192, 00:19:28.439 "large_pool_count": 1024, 00:19:28.439 "small_bufsize": 8192, 00:19:28.439 "large_bufsize": 135168 00:19:28.439 } 00:19:28.439 } 00:19:28.439 ] 00:19:28.439 }, 00:19:28.439 { 00:19:28.439 "subsystem": "sock", 00:19:28.439 "config": [ 00:19:28.439 { 00:19:28.439 "method": "sock_set_default_impl", 00:19:28.439 "params": { 00:19:28.439 "impl_name": "posix" 00:19:28.439 } 00:19:28.439 }, 00:19:28.439 { 00:19:28.439 "method": "sock_impl_set_options", 00:19:28.439 "params": { 00:19:28.439 "impl_name": "ssl", 00:19:28.439 "recv_buf_size": 4096, 00:19:28.439 "send_buf_size": 4096, 00:19:28.439 "enable_recv_pipe": true, 00:19:28.439 "enable_quickack": false, 00:19:28.439 "enable_placement_id": 0, 00:19:28.439 "enable_zerocopy_send_server": true, 00:19:28.439 "enable_zerocopy_send_client": false, 00:19:28.439 "zerocopy_threshold": 0, 00:19:28.439 "tls_version": 0, 00:19:28.439 "enable_ktls": false 00:19:28.439 } 00:19:28.439 }, 00:19:28.439 { 00:19:28.439 "method": "sock_impl_set_options", 00:19:28.439 "params": { 00:19:28.439 "impl_name": "posix", 00:19:28.439 "recv_buf_size": 2097152, 00:19:28.439 "send_buf_size": 2097152, 00:19:28.439 "enable_recv_pipe": true, 00:19:28.439 
"enable_quickack": false, 00:19:28.439 "enable_placement_id": 0, 00:19:28.439 "enable_zerocopy_send_server": true, 00:19:28.439 "enable_zerocopy_send_client": false, 00:19:28.439 "zerocopy_threshold": 0, 00:19:28.439 "tls_version": 0, 00:19:28.439 "enable_ktls": false 00:19:28.439 } 00:19:28.439 } 00:19:28.439 ] 00:19:28.439 }, 00:19:28.439 { 00:19:28.439 "subsystem": "vmd", 00:19:28.439 "config": [] 00:19:28.439 }, 00:19:28.439 { 00:19:28.439 "subsystem": "accel", 00:19:28.439 "config": [ 00:19:28.439 { 00:19:28.439 "method": "accel_set_options", 00:19:28.439 "params": { 00:19:28.439 "small_cache_size": 128, 00:19:28.439 "large_cache_size": 16, 00:19:28.439 "task_count": 2048, 00:19:28.439 "sequence_count": 2048, 00:19:28.439 "buf_count": 2048 00:19:28.439 } 00:19:28.439 } 00:19:28.439 ] 00:19:28.439 }, 00:19:28.439 { 00:19:28.439 "subsystem": "bdev", 00:19:28.439 "config": [ 00:19:28.439 { 00:19:28.439 "method": "bdev_set_options", 00:19:28.439 "params": { 00:19:28.439 "bdev_io_pool_size": 65535, 00:19:28.439 "bdev_io_cache_size": 256, 00:19:28.439 "bdev_auto_examine": true, 00:19:28.439 "iobuf_small_cache_size": 128, 00:19:28.439 "iobuf_large_cache_size": 16 00:19:28.439 } 00:19:28.439 }, 00:19:28.439 { 00:19:28.439 "method": "bdev_raid_set_options", 00:19:28.439 "params": { 00:19:28.439 "process_window_size_kb": 1024 00:19:28.439 } 00:19:28.439 }, 00:19:28.439 { 00:19:28.439 "method": "bdev_iscsi_set_options", 00:19:28.439 "params": { 00:19:28.439 "timeout_sec": 30 00:19:28.439 } 00:19:28.439 }, 00:19:28.439 { 00:19:28.439 "method": "bdev_nvme_set_options", 00:19:28.439 "params": { 00:19:28.439 "action_on_timeout": "none", 00:19:28.439 "timeout_us": 0, 00:19:28.439 "timeout_admin_us": 0, 00:19:28.439 "keep_alive_timeout_ms": 10000, 00:19:28.439 "arbitration_burst": 0, 00:19:28.439 "low_priority_weight": 0, 00:19:28.439 "medium_priority_weight": 0, 00:19:28.439 "high_priority_weight": 0, 00:19:28.439 "nvme_adminq_poll_period_us": 10000, 00:19:28.439 "nvme_ioq_poll_period_us": 0, 00:19:28.439 "io_queue_requests": 0, 00:19:28.439 "delay_cmd_submit": true, 00:19:28.439 "transport_retry_count": 4, 00:19:28.439 "bdev_retry_count": 3, 00:19:28.439 "transport_ack_timeout": 0, 00:19:28.439 "ctrlr_loss_timeout_sec": 0, 00:19:28.439 "reconnect_delay_sec": 0, 00:19:28.439 "fast_io_fail_timeout_sec": 0, 00:19:28.439 "disable_auto_failback": false, 00:19:28.439 "generate_uuids": false, 00:19:28.439 "transport_tos": 0, 00:19:28.439 "nvme_error_stat": false, 00:19:28.439 "rdma_srq_size": 0, 00:19:28.439 "io_path_stat": false, 00:19:28.439 "allow_accel_sequence": false, 00:19:28.439 "rdma_max_cq_size": 0, 00:19:28.439 "rdma_cm_event_timeout_ms": 0, 00:19:28.439 "dhchap_digests": [ 00:19:28.439 "sha256", 00:19:28.439 "sha384", 00:19:28.439 "sha512" 00:19:28.439 ], 00:19:28.439 "dhchap_dhgroups": [ 00:19:28.439 "null", 00:19:28.439 "ffdhe2048", 00:19:28.439 "ffdhe3072", 00:19:28.439 "ffdhe4096", 00:19:28.439 "ffdhe6144", 00:19:28.439 "ffdhe8192" 00:19:28.439 ] 00:19:28.439 } 00:19:28.439 }, 00:19:28.439 { 00:19:28.439 "method": "bdev_nvme_set_hotplug", 00:19:28.439 "params": { 00:19:28.439 "period_us": 100000, 00:19:28.439 "enable": false 00:19:28.439 } 00:19:28.439 }, 00:19:28.439 { 00:19:28.439 "method": "bdev_malloc_create", 00:19:28.439 "params": { 00:19:28.439 "name": "malloc0", 00:19:28.440 "num_blocks": 8192, 00:19:28.440 "block_size": 4096, 00:19:28.440 "physical_block_size": 4096, 00:19:28.440 "uuid": "9ed2432f-3b69-4256-9870-792be8f1aab0", 00:19:28.440 "optimal_io_boundary": 0 00:19:28.440 } 
00:19:28.440 }, 00:19:28.440 { 00:19:28.440 "method": "bdev_wait_for_examine" 00:19:28.440 } 00:19:28.440 ] 00:19:28.440 }, 00:19:28.440 { 00:19:28.440 "subsystem": "nbd", 00:19:28.440 "config": [] 00:19:28.440 }, 00:19:28.440 { 00:19:28.440 "subsystem": "scheduler", 00:19:28.440 "config": [ 00:19:28.440 { 00:19:28.440 "method": "framework_set_scheduler", 00:19:28.440 "params": { 00:19:28.440 "name": "static" 00:19:28.440 } 00:19:28.440 } 00:19:28.440 ] 00:19:28.440 }, 00:19:28.440 { 00:19:28.440 "subsystem": "nvmf", 00:19:28.440 "config": [ 00:19:28.440 { 00:19:28.440 "method": "nvmf_set_config", 00:19:28.440 "params": { 00:19:28.440 "discovery_filter": "match_any", 00:19:28.440 "admin_cmd_passthru": { 00:19:28.440 "identify_ctrlr": false 00:19:28.440 } 00:19:28.440 } 00:19:28.440 }, 00:19:28.440 { 00:19:28.440 "method": "nvmf_set_max_subsystems", 00:19:28.440 "params": { 00:19:28.440 "max_subsystems": 1024 00:19:28.440 } 00:19:28.440 }, 00:19:28.440 { 00:19:28.440 "method": "nvmf_set_crdt", 00:19:28.440 "params": { 00:19:28.440 "crdt1": 0, 00:19:28.440 "crdt2": 0, 00:19:28.440 "crdt3": 0 00:19:28.440 } 00:19:28.440 }, 00:19:28.440 { 00:19:28.440 "method": "nvmf_create_transport", 00:19:28.440 "params": { 00:19:28.440 "trtype": "TCP", 00:19:28.440 "max_queue_depth": 128, 00:19:28.440 "max_io_qpairs_per_ctrlr": 127, 00:19:28.440 "in_capsule_data_size": 4096, 00:19:28.440 "max_io_size": 131072, 00:19:28.440 "io_unit_size": 131072, 00:19:28.440 "max_aq_depth": 128, 00:19:28.440 "num_shared_buffers": 511, 00:19:28.440 "buf_cache_size": 4294967295, 00:19:28.440 "dif_insert_or_strip": false, 00:19:28.440 "zcopy": false, 00:19:28.440 "c2h_success": false, 00:19:28.440 "sock_priority": 0, 00:19:28.440 "abort_timeout_sec": 1, 00:19:28.440 "ack_timeout": 0, 00:19:28.440 "data_wr_pool_size": 0 00:19:28.440 } 00:19:28.440 }, 00:19:28.440 { 00:19:28.440 "method": "nvmf_create_subsystem", 00:19:28.440 "params": { 00:19:28.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.440 "allow_any_host": false, 00:19:28.440 "serial_number": "SPDK00000000000001", 00:19:28.440 "model_number": "SPDK bdev Controller", 00:19:28.440 "max_namespaces": 10, 00:19:28.440 "min_cntlid": 1, 00:19:28.440 "max_cntlid": 65519, 00:19:28.440 "ana_reporting": false 00:19:28.440 } 00:19:28.440 }, 00:19:28.440 { 00:19:28.440 "method": "nvmf_subsystem_add_host", 00:19:28.440 "params": { 00:19:28.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.440 "host": "nqn.2016-06.io.spdk:host1", 00:19:28.440 "psk": "/tmp/tmp.yTLVvHckaJ" 00:19:28.440 } 00:19:28.440 }, 00:19:28.440 { 00:19:28.440 "method": "nvmf_subsystem_add_ns", 00:19:28.440 "params": { 00:19:28.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.440 "namespace": { 00:19:28.440 "nsid": 1, 00:19:28.440 "bdev_name": "malloc0", 00:19:28.440 "nguid": "9ED2432F3B6942569870792BE8F1AAB0", 00:19:28.440 "uuid": "9ed2432f-3b69-4256-9870-792be8f1aab0", 00:19:28.440 "no_auto_visible": false 00:19:28.440 } 00:19:28.440 } 00:19:28.440 }, 00:19:28.440 { 00:19:28.440 "method": "nvmf_subsystem_add_listener", 00:19:28.440 "params": { 00:19:28.440 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.440 "listen_address": { 00:19:28.440 "trtype": "TCP", 00:19:28.440 "adrfam": "IPv4", 00:19:28.440 "traddr": "10.0.0.2", 00:19:28.440 "trsvcid": "4420" 00:19:28.440 }, 00:19:28.440 "secure_channel": true 00:19:28.440 } 00:19:28.440 } 00:19:28.440 ] 00:19:28.440 } 00:19:28.440 ] 00:19:28.440 }' 00:19:28.440 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:28.440 15:56:25 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.440 15:56:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=782432 00:19:28.440 15:56:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:28.440 15:56:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 782432 00:19:28.440 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 782432 ']' 00:19:28.440 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.440 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.440 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.440 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.440 15:56:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.440 [2024-07-12 15:56:25.589751] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:19:28.440 [2024-07-12 15:56:25.589854] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.440 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.440 [2024-07-12 15:56:25.652746] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.697 [2024-07-12 15:56:25.750871] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.697 [2024-07-12 15:56:25.750926] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.697 [2024-07-12 15:56:25.750948] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.697 [2024-07-12 15:56:25.750958] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.697 [2024-07-12 15:56:25.750968] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
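Note how this target instance (pid 782432) is not rebuilt RPC by RPC: the JSON dumped by save_config from the previous instance is echoed back into nvmf_tgt through -c /dev/fd/62, so the TCP transport, subsystem, namespace, TLS listener and PSK-protected host entry all come up straight from configuration. A rough sketch of that round trip, with a regular file tgt.json standing in for the anonymous descriptor the script uses:

# Capture the live target configuration (the dump shown above).
scripts/rpc.py save_config > tgt.json

# Restart the target and load the same state at startup instead of replaying RPCs.
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c tgt.json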
00:19:28.697 [2024-07-12 15:56:25.751040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.697 [2024-07-12 15:56:25.976460] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.955 [2024-07-12 15:56:25.992454] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:28.955 [2024-07-12 15:56:26.008484] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:28.955 [2024-07-12 15:56:26.017947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.521 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.521 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:29.521 15:56:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:29.521 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:29.521 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.521 15:56:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.521 15:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=782585 00:19:29.521 15:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 782585 /var/tmp/bdevperf.sock 00:19:29.521 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 782585 ']' 00:19:29.521 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:29.521 15:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:29.521 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.521 15:56:26 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:29.521 "subsystems": [ 00:19:29.521 { 00:19:29.521 "subsystem": "keyring", 00:19:29.521 "config": [] 00:19:29.521 }, 00:19:29.521 { 00:19:29.521 "subsystem": "iobuf", 00:19:29.521 "config": [ 00:19:29.521 { 00:19:29.521 "method": "iobuf_set_options", 00:19:29.521 "params": { 00:19:29.521 "small_pool_count": 8192, 00:19:29.521 "large_pool_count": 1024, 00:19:29.521 "small_bufsize": 8192, 00:19:29.521 "large_bufsize": 135168 00:19:29.521 } 00:19:29.521 } 00:19:29.521 ] 00:19:29.521 }, 00:19:29.521 { 00:19:29.521 "subsystem": "sock", 00:19:29.521 "config": [ 00:19:29.521 { 00:19:29.521 "method": "sock_set_default_impl", 00:19:29.521 "params": { 00:19:29.521 "impl_name": "posix" 00:19:29.521 } 00:19:29.521 }, 00:19:29.521 { 00:19:29.521 "method": "sock_impl_set_options", 00:19:29.521 "params": { 00:19:29.521 "impl_name": "ssl", 00:19:29.521 "recv_buf_size": 4096, 00:19:29.521 "send_buf_size": 4096, 00:19:29.521 "enable_recv_pipe": true, 00:19:29.521 "enable_quickack": false, 00:19:29.521 "enable_placement_id": 0, 00:19:29.521 "enable_zerocopy_send_server": true, 00:19:29.521 "enable_zerocopy_send_client": false, 00:19:29.521 "zerocopy_threshold": 0, 00:19:29.521 "tls_version": 0, 00:19:29.521 "enable_ktls": false 00:19:29.521 } 00:19:29.521 }, 00:19:29.521 { 00:19:29.521 "method": "sock_impl_set_options", 00:19:29.521 "params": { 00:19:29.521 "impl_name": "posix", 00:19:29.521 "recv_buf_size": 2097152, 00:19:29.521 "send_buf_size": 2097152, 00:19:29.521 "enable_recv_pipe": true, 00:19:29.521 
"enable_quickack": false, 00:19:29.521 "enable_placement_id": 0, 00:19:29.521 "enable_zerocopy_send_server": true, 00:19:29.521 "enable_zerocopy_send_client": false, 00:19:29.521 "zerocopy_threshold": 0, 00:19:29.521 "tls_version": 0, 00:19:29.521 "enable_ktls": false 00:19:29.521 } 00:19:29.521 } 00:19:29.521 ] 00:19:29.521 }, 00:19:29.521 { 00:19:29.521 "subsystem": "vmd", 00:19:29.521 "config": [] 00:19:29.521 }, 00:19:29.521 { 00:19:29.521 "subsystem": "accel", 00:19:29.521 "config": [ 00:19:29.521 { 00:19:29.521 "method": "accel_set_options", 00:19:29.521 "params": { 00:19:29.521 "small_cache_size": 128, 00:19:29.521 "large_cache_size": 16, 00:19:29.521 "task_count": 2048, 00:19:29.521 "sequence_count": 2048, 00:19:29.521 "buf_count": 2048 00:19:29.521 } 00:19:29.521 } 00:19:29.521 ] 00:19:29.521 }, 00:19:29.521 { 00:19:29.521 "subsystem": "bdev", 00:19:29.521 "config": [ 00:19:29.521 { 00:19:29.521 "method": "bdev_set_options", 00:19:29.521 "params": { 00:19:29.521 "bdev_io_pool_size": 65535, 00:19:29.521 "bdev_io_cache_size": 256, 00:19:29.521 "bdev_auto_examine": true, 00:19:29.521 "iobuf_small_cache_size": 128, 00:19:29.521 "iobuf_large_cache_size": 16 00:19:29.521 } 00:19:29.521 }, 00:19:29.521 { 00:19:29.521 "method": "bdev_raid_set_options", 00:19:29.521 "params": { 00:19:29.521 "process_window_size_kb": 1024 00:19:29.521 } 00:19:29.521 }, 00:19:29.521 { 00:19:29.521 "method": "bdev_iscsi_set_options", 00:19:29.521 "params": { 00:19:29.521 "timeout_sec": 30 00:19:29.521 } 00:19:29.521 }, 00:19:29.521 { 00:19:29.521 "method": "bdev_nvme_set_options", 00:19:29.521 "params": { 00:19:29.521 "action_on_timeout": "none", 00:19:29.521 "timeout_us": 0, 00:19:29.521 "timeout_admin_us": 0, 00:19:29.521 "keep_alive_timeout_ms": 10000, 00:19:29.521 "arbitration_burst": 0, 00:19:29.521 "low_priority_weight": 0, 00:19:29.521 "medium_priority_weight": 0, 00:19:29.521 "high_priority_weight": 0, 00:19:29.521 "nvme_adminq_poll_period_us": 10000, 00:19:29.521 "nvme_ioq_poll_period_us": 0, 00:19:29.521 "io_queue_requests": 512, 00:19:29.521 "delay_cmd_submit": true, 00:19:29.521 "transport_retry_count": 4, 00:19:29.521 "bdev_retry_count": 3, 00:19:29.521 "transport_ack_timeout": 0, 00:19:29.521 "ctrlr_loss_timeout_sec": 0, 00:19:29.521 "reconnect_delay_sec": 0, 00:19:29.521 "fast_io_fail_timeout_sec": 0, 00:19:29.521 "disable_auto_failback": false, 00:19:29.521 "generate_uuids": false, 00:19:29.521 "transport_tos": 0, 00:19:29.521 "nvme_error_stat": false, 00:19:29.521 "rdma_srq_size": 0, 00:19:29.521 "io_path_stat": false, 00:19:29.521 "allow_accel_sequence": false, 00:19:29.521 "rdma_max_cq_size": 0, 00:19:29.521 "rdma_cm_event_timeout_ms": 0, 00:19:29.521 "dhchap_digests": [ 00:19:29.521 "sha256", 00:19:29.521 "sha384", 00:19:29.521 "sha512" 00:19:29.521 ], 00:19:29.521 "dhchap_dhgroups": [ 00:19:29.521 "null", 00:19:29.521 "ffdhe2048", 00:19:29.521 "ffdhe3072", 00:19:29.521 "ffdhe4096", 00:19:29.521 "ffd 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:29.521 he6144", 00:19:29.521 "ffdhe8192" 00:19:29.521 ] 00:19:29.521 } 00:19:29.521 }, 00:19:29.521 { 00:19:29.521 "method": "bdev_nvme_attach_controller", 00:19:29.521 "params": { 00:19:29.522 "name": "TLSTEST", 00:19:29.522 "trtype": "TCP", 00:19:29.522 "adrfam": "IPv4", 00:19:29.522 "traddr": "10.0.0.2", 00:19:29.522 "trsvcid": "4420", 00:19:29.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.522 "prchk_reftag": false, 00:19:29.522 "prchk_guard": false, 00:19:29.522 "ctrlr_loss_timeout_sec": 0, 00:19:29.522 "reconnect_delay_sec": 0, 00:19:29.522 "fast_io_fail_timeout_sec": 0, 00:19:29.522 "psk": "/tmp/tmp.yTLVvHckaJ", 00:19:29.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.522 "hdgst": false, 00:19:29.522 "ddgst": false 00:19:29.522 } 00:19:29.522 }, 00:19:29.522 { 00:19:29.522 "method": "bdev_nvme_set_hotplug", 00:19:29.522 "params": { 00:19:29.522 "period_us": 100000, 00:19:29.522 "enable": false 00:19:29.522 } 00:19:29.522 }, 00:19:29.522 { 00:19:29.522 "method": "bdev_wait_for_examine" 00:19:29.522 } 00:19:29.522 ] 00:19:29.522 }, 00:19:29.522 { 00:19:29.522 "subsystem": "nbd", 00:19:29.522 "config": [] 00:19:29.522 } 00:19:29.522 ] 00:19:29.522 }' 00:19:29.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:29.522 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.522 15:56:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.522 [2024-07-12 15:56:26.629794] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:19:29.522 [2024-07-12 15:56:26.629883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782585 ] 00:19:29.522 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.522 [2024-07-12 15:56:26.687042] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.522 [2024-07-12 15:56:26.792999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.781 [2024-07-12 15:56:26.964222] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:29.781 [2024-07-12 15:56:26.964349] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:30.345 15:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.346 15:56:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:30.346 15:56:27 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:30.604 Running I/O for 10 seconds... 
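The initiator side mirrors that approach: bdevperf is started idle with -z and its own RPC socket, the TLS controller attach appears earlier in the run as an explicit bdev_nvme_attach_controller call and here as part of the JSON fed in through -c /dev/fd/63, and bdevperf.py perform_tests then drives the 10-second verify workload. A condensed sketch of the explicit-RPC variant, with paths shortened:

# Idle bdevperf instance: 128-deep queue, 4 KiB I/O, verify workload, 10-second run.
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &

# Attach the NVMe/TCP controller over TLS with the same PSK file the target was given.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.yTLVvHckaJ

# Kick off the configured workload; the 20-second limit comfortably covers the 10-second run.
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests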
00:19:40.637 00:19:40.637 Latency(us) 00:19:40.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.637 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:40.637 Verification LBA range: start 0x0 length 0x2000 00:19:40.637 TLSTESTn1 : 10.02 3488.62 13.63 0.00 0.00 36633.48 6092.42 46797.56 00:19:40.637 =================================================================================================================== 00:19:40.637 Total : 3488.62 13.63 0.00 0.00 36633.48 6092.42 46797.56 00:19:40.637 0 00:19:40.637 15:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:40.637 15:56:37 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 782585 00:19:40.637 15:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 782585 ']' 00:19:40.637 15:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 782585 00:19:40.637 15:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:40.637 15:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.637 15:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 782585 00:19:40.637 15:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:40.637 15:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:40.637 15:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 782585' 00:19:40.637 killing process with pid 782585 00:19:40.637 15:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 782585 00:19:40.637 Received shutdown signal, test time was about 10.000000 seconds 00:19:40.637 00:19:40.637 Latency(us) 00:19:40.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.637 =================================================================================================================== 00:19:40.637 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.637 [2024-07-12 15:56:37.828335] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:40.637 15:56:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 782585 00:19:40.897 15:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 782432 00:19:40.897 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 782432 ']' 00:19:40.897 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 782432 00:19:40.897 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:40.897 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.897 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 782432 00:19:40.897 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:40.897 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:40.897 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 782432' 00:19:40.897 killing process with pid 782432 00:19:40.897 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 782432 00:19:40.897 [2024-07-12 15:56:38.121546] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 
times 00:19:40.897 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 782432 00:19:41.154 15:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:41.154 15:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:41.154 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:41.154 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.154 15:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=783911 00:19:41.154 15:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:41.154 15:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 783911 00:19:41.154 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 783911 ']' 00:19:41.154 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.154 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.154 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.155 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.155 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.155 [2024-07-12 15:56:38.443029] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:19:41.155 [2024-07-12 15:56:38.443122] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.412 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.412 [2024-07-12 15:56:38.513175] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.412 [2024-07-12 15:56:38.619431] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.412 [2024-07-12 15:56:38.619485] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.412 [2024-07-12 15:56:38.619507] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.412 [2024-07-12 15:56:38.619518] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.412 [2024-07-12 15:56:38.619528] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
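The throughput column in the 10-second run above is internally consistent: with a 4096-byte I/O size, 3488.62 IOPS works out to 3488.62 x 4096 / 2^20, roughly 13.63 MiB/s, matching the value reported next to it. A one-line check using only the numbers printed in the table:

# MiB/s = IOPS x block size in bytes / bytes per MiB
awk 'BEGIN { printf "%.2f MiB/s\n", 3488.62 * 4096 / (1024 * 1024) }'   # prints 13.63 MiB/s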
00:19:41.412 [2024-07-12 15:56:38.619553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.670 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:41.670 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:41.670 15:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.670 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:41.670 15:56:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.670 15:56:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.670 15:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.yTLVvHckaJ 00:19:41.670 15:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.yTLVvHckaJ 00:19:41.670 15:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:41.930 [2024-07-12 15:56:38.975683] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.930 15:56:38 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:42.190 15:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:42.190 [2024-07-12 15:56:39.465032] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:42.190 [2024-07-12 15:56:39.465281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.190 15:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:42.450 malloc0 00:19:42.450 15:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:42.710 15:56:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yTLVvHckaJ 00:19:42.967 [2024-07-12 15:56:40.214743] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:42.967 15:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=784193 00:19:42.967 15:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:42.967 15:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 784193 /var/tmp/bdevperf.sock 00:19:42.967 15:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 784193 ']' 00:19:42.967 15:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:42.967 15:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:42.967 15:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:42.967 15:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:42.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:42.967 15:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:42.967 15:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.226 [2024-07-12 15:56:40.277034] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:19:43.226 [2024-07-12 15:56:40.277131] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784193 ] 00:19:43.226 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.227 [2024-07-12 15:56:40.335378] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.227 [2024-07-12 15:56:40.444655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.485 15:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.485 15:56:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:43.485 15:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yTLVvHckaJ 00:19:43.742 15:56:40 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:44.000 [2024-07-12 15:56:41.115035] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:44.000 nvme0n1 00:19:44.000 15:56:41 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:44.259 Running I/O for 1 seconds... 
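This bdevperf instance (pid 784193) switches to the keyring flow: the PSK file is first registered with keyring_file_add_key under the name key0, and bdev_nvme_attach_controller then references the key by name instead of taking a file path, which, unlike the earlier path-based attaches, does not trigger the spdk_nvme_ctrlr_opts.psk deprecation warning here. A minimal sketch reusing the names from this run:

# Register the PSK file as a named key inside the bdevperf application.
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yTLVvHckaJ

# Attach the controller over TLS, referring to the key by name rather than by path.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Short verify pass against the resulting nvme0n1 bdev.
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests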
00:19:45.197 00:19:45.197 Latency(us) 00:19:45.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.197 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:45.197 Verification LBA range: start 0x0 length 0x2000 00:19:45.197 nvme0n1 : 1.02 3314.90 12.95 0.00 0.00 38228.21 6893.42 52817.16 00:19:45.197 =================================================================================================================== 00:19:45.197 Total : 3314.90 12.95 0.00 0.00 38228.21 6893.42 52817.16 00:19:45.197 0 00:19:45.197 15:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 784193 00:19:45.197 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 784193 ']' 00:19:45.197 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 784193 00:19:45.197 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:45.197 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.197 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 784193 00:19:45.197 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:45.197 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:45.197 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 784193' 00:19:45.197 killing process with pid 784193 00:19:45.197 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 784193 00:19:45.197 Received shutdown signal, test time was about 1.000000 seconds 00:19:45.197 00:19:45.197 Latency(us) 00:19:45.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.197 =================================================================================================================== 00:19:45.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:45.197 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 784193 00:19:45.456 15:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 783911 00:19:45.456 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 783911 ']' 00:19:45.457 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 783911 00:19:45.457 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:45.457 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:45.457 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 783911 00:19:45.457 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:45.457 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:45.457 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 783911' 00:19:45.457 killing process with pid 783911 00:19:45.457 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 783911 00:19:45.457 [2024-07-12 15:56:42.684478] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:45.457 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 783911 00:19:45.716 15:56:42 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:19:45.716 15:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.716 15:56:42 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.716 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.716 15:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=784475 00:19:45.716 15:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:45.716 15:56:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 784475 00:19:45.716 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 784475 ']' 00:19:45.716 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.716 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.716 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.716 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.716 15:56:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.974 [2024-07-12 15:56:43.015514] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:19:45.974 [2024-07-12 15:56:43.015596] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.974 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.974 [2024-07-12 15:56:43.080479] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.974 [2024-07-12 15:56:43.190473] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.974 [2024-07-12 15:56:43.190543] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.974 [2024-07-12 15:56:43.190556] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.974 [2024-07-12 15:56:43.190567] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.974 [2024-07-12 15:56:43.190576] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:45.974 [2024-07-12 15:56:43.190602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.231 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.231 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:46.231 15:56:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.231 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.231 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.231 15:56:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.231 15:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:19:46.231 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.231 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.231 [2024-07-12 15:56:43.327503] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.231 malloc0 00:19:46.231 [2024-07-12 15:56:43.358470] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:46.231 [2024-07-12 15:56:43.358683] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.231 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.231 15:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=784619 00:19:46.231 15:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:46.231 15:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 784619 /var/tmp/bdevperf.sock 00:19:46.232 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 784619 ']' 00:19:46.232 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.232 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.232 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.232 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.232 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.232 [2024-07-12 15:56:43.431119] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:19:46.232 [2024-07-12 15:56:43.431209] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784619 ] 00:19:46.232 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.232 [2024-07-12 15:56:43.491020] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.489 [2024-07-12 15:56:43.598869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.489 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.489 15:56:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:46.489 15:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yTLVvHckaJ 00:19:46.747 15:56:43 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:47.007 [2024-07-12 15:56:44.148559] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.007 nvme0n1 00:19:47.007 15:56:44 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:47.302 Running I/O for 1 seconds... 00:19:48.275 00:19:48.275 Latency(us) 00:19:48.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.275 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:48.275 Verification LBA range: start 0x0 length 0x2000 00:19:48.275 nvme0n1 : 1.02 3539.90 13.83 0.00 0.00 35786.70 5995.33 39807.05 00:19:48.275 =================================================================================================================== 00:19:48.275 Total : 3539.90 13.83 0.00 0.00 35786.70 5995.33 39807.05 00:19:48.275 0 00:19:48.275 15:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:19:48.275 15:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.275 15:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.275 15:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.275 15:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:19:48.275 "subsystems": [ 00:19:48.275 { 00:19:48.275 "subsystem": "keyring", 00:19:48.275 "config": [ 00:19:48.275 { 00:19:48.275 "method": "keyring_file_add_key", 00:19:48.275 "params": { 00:19:48.275 "name": "key0", 00:19:48.275 "path": "/tmp/tmp.yTLVvHckaJ" 00:19:48.275 } 00:19:48.275 } 00:19:48.275 ] 00:19:48.275 }, 00:19:48.275 { 00:19:48.275 "subsystem": "iobuf", 00:19:48.275 "config": [ 00:19:48.275 { 00:19:48.275 "method": "iobuf_set_options", 00:19:48.275 "params": { 00:19:48.275 "small_pool_count": 8192, 00:19:48.275 "large_pool_count": 1024, 00:19:48.275 "small_bufsize": 8192, 00:19:48.275 "large_bufsize": 135168 00:19:48.275 } 00:19:48.275 } 00:19:48.275 ] 00:19:48.275 }, 00:19:48.275 { 00:19:48.275 "subsystem": "sock", 00:19:48.275 "config": [ 00:19:48.275 { 00:19:48.275 "method": "sock_set_default_impl", 00:19:48.275 "params": { 00:19:48.275 "impl_name": "posix" 00:19:48.275 } 
00:19:48.275 }, 00:19:48.275 { 00:19:48.275 "method": "sock_impl_set_options", 00:19:48.275 "params": { 00:19:48.275 "impl_name": "ssl", 00:19:48.275 "recv_buf_size": 4096, 00:19:48.275 "send_buf_size": 4096, 00:19:48.275 "enable_recv_pipe": true, 00:19:48.275 "enable_quickack": false, 00:19:48.275 "enable_placement_id": 0, 00:19:48.275 "enable_zerocopy_send_server": true, 00:19:48.275 "enable_zerocopy_send_client": false, 00:19:48.275 "zerocopy_threshold": 0, 00:19:48.275 "tls_version": 0, 00:19:48.275 "enable_ktls": false 00:19:48.275 } 00:19:48.275 }, 00:19:48.275 { 00:19:48.275 "method": "sock_impl_set_options", 00:19:48.275 "params": { 00:19:48.275 "impl_name": "posix", 00:19:48.275 "recv_buf_size": 2097152, 00:19:48.275 "send_buf_size": 2097152, 00:19:48.275 "enable_recv_pipe": true, 00:19:48.275 "enable_quickack": false, 00:19:48.275 "enable_placement_id": 0, 00:19:48.275 "enable_zerocopy_send_server": true, 00:19:48.275 "enable_zerocopy_send_client": false, 00:19:48.275 "zerocopy_threshold": 0, 00:19:48.275 "tls_version": 0, 00:19:48.275 "enable_ktls": false 00:19:48.275 } 00:19:48.275 } 00:19:48.275 ] 00:19:48.275 }, 00:19:48.275 { 00:19:48.275 "subsystem": "vmd", 00:19:48.275 "config": [] 00:19:48.275 }, 00:19:48.275 { 00:19:48.275 "subsystem": "accel", 00:19:48.275 "config": [ 00:19:48.275 { 00:19:48.275 "method": "accel_set_options", 00:19:48.275 "params": { 00:19:48.275 "small_cache_size": 128, 00:19:48.275 "large_cache_size": 16, 00:19:48.275 "task_count": 2048, 00:19:48.275 "sequence_count": 2048, 00:19:48.275 "buf_count": 2048 00:19:48.275 } 00:19:48.275 } 00:19:48.275 ] 00:19:48.275 }, 00:19:48.275 { 00:19:48.275 "subsystem": "bdev", 00:19:48.275 "config": [ 00:19:48.275 { 00:19:48.275 "method": "bdev_set_options", 00:19:48.275 "params": { 00:19:48.275 "bdev_io_pool_size": 65535, 00:19:48.275 "bdev_io_cache_size": 256, 00:19:48.275 "bdev_auto_examine": true, 00:19:48.275 "iobuf_small_cache_size": 128, 00:19:48.276 "iobuf_large_cache_size": 16 00:19:48.276 } 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "method": "bdev_raid_set_options", 00:19:48.276 "params": { 00:19:48.276 "process_window_size_kb": 1024 00:19:48.276 } 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "method": "bdev_iscsi_set_options", 00:19:48.276 "params": { 00:19:48.276 "timeout_sec": 30 00:19:48.276 } 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "method": "bdev_nvme_set_options", 00:19:48.276 "params": { 00:19:48.276 "action_on_timeout": "none", 00:19:48.276 "timeout_us": 0, 00:19:48.276 "timeout_admin_us": 0, 00:19:48.276 "keep_alive_timeout_ms": 10000, 00:19:48.276 "arbitration_burst": 0, 00:19:48.276 "low_priority_weight": 0, 00:19:48.276 "medium_priority_weight": 0, 00:19:48.276 "high_priority_weight": 0, 00:19:48.276 "nvme_adminq_poll_period_us": 10000, 00:19:48.276 "nvme_ioq_poll_period_us": 0, 00:19:48.276 "io_queue_requests": 0, 00:19:48.276 "delay_cmd_submit": true, 00:19:48.276 "transport_retry_count": 4, 00:19:48.276 "bdev_retry_count": 3, 00:19:48.276 "transport_ack_timeout": 0, 00:19:48.276 "ctrlr_loss_timeout_sec": 0, 00:19:48.276 "reconnect_delay_sec": 0, 00:19:48.276 "fast_io_fail_timeout_sec": 0, 00:19:48.276 "disable_auto_failback": false, 00:19:48.276 "generate_uuids": false, 00:19:48.276 "transport_tos": 0, 00:19:48.276 "nvme_error_stat": false, 00:19:48.276 "rdma_srq_size": 0, 00:19:48.276 "io_path_stat": false, 00:19:48.276 "allow_accel_sequence": false, 00:19:48.276 "rdma_max_cq_size": 0, 00:19:48.276 "rdma_cm_event_timeout_ms": 0, 00:19:48.276 "dhchap_digests": [ 00:19:48.276 "sha256", 
00:19:48.276 "sha384", 00:19:48.276 "sha512" 00:19:48.276 ], 00:19:48.276 "dhchap_dhgroups": [ 00:19:48.276 "null", 00:19:48.276 "ffdhe2048", 00:19:48.276 "ffdhe3072", 00:19:48.276 "ffdhe4096", 00:19:48.276 "ffdhe6144", 00:19:48.276 "ffdhe8192" 00:19:48.276 ] 00:19:48.276 } 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "method": "bdev_nvme_set_hotplug", 00:19:48.276 "params": { 00:19:48.276 "period_us": 100000, 00:19:48.276 "enable": false 00:19:48.276 } 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "method": "bdev_malloc_create", 00:19:48.276 "params": { 00:19:48.276 "name": "malloc0", 00:19:48.276 "num_blocks": 8192, 00:19:48.276 "block_size": 4096, 00:19:48.276 "physical_block_size": 4096, 00:19:48.276 "uuid": "8b2c2173-a639-4c9c-8960-601df8161444", 00:19:48.276 "optimal_io_boundary": 0 00:19:48.276 } 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "method": "bdev_wait_for_examine" 00:19:48.276 } 00:19:48.276 ] 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "subsystem": "nbd", 00:19:48.276 "config": [] 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "subsystem": "scheduler", 00:19:48.276 "config": [ 00:19:48.276 { 00:19:48.276 "method": "framework_set_scheduler", 00:19:48.276 "params": { 00:19:48.276 "name": "static" 00:19:48.276 } 00:19:48.276 } 00:19:48.276 ] 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "subsystem": "nvmf", 00:19:48.276 "config": [ 00:19:48.276 { 00:19:48.276 "method": "nvmf_set_config", 00:19:48.276 "params": { 00:19:48.276 "discovery_filter": "match_any", 00:19:48.276 "admin_cmd_passthru": { 00:19:48.276 "identify_ctrlr": false 00:19:48.276 } 00:19:48.276 } 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "method": "nvmf_set_max_subsystems", 00:19:48.276 "params": { 00:19:48.276 "max_subsystems": 1024 00:19:48.276 } 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "method": "nvmf_set_crdt", 00:19:48.276 "params": { 00:19:48.276 "crdt1": 0, 00:19:48.276 "crdt2": 0, 00:19:48.276 "crdt3": 0 00:19:48.276 } 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "method": "nvmf_create_transport", 00:19:48.276 "params": { 00:19:48.276 "trtype": "TCP", 00:19:48.276 "max_queue_depth": 128, 00:19:48.276 "max_io_qpairs_per_ctrlr": 127, 00:19:48.276 "in_capsule_data_size": 4096, 00:19:48.276 "max_io_size": 131072, 00:19:48.276 "io_unit_size": 131072, 00:19:48.276 "max_aq_depth": 128, 00:19:48.276 "num_shared_buffers": 511, 00:19:48.276 "buf_cache_size": 4294967295, 00:19:48.276 "dif_insert_or_strip": false, 00:19:48.276 "zcopy": false, 00:19:48.276 "c2h_success": false, 00:19:48.276 "sock_priority": 0, 00:19:48.276 "abort_timeout_sec": 1, 00:19:48.276 "ack_timeout": 0, 00:19:48.276 "data_wr_pool_size": 0 00:19:48.276 } 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "method": "nvmf_create_subsystem", 00:19:48.276 "params": { 00:19:48.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.276 "allow_any_host": false, 00:19:48.276 "serial_number": "00000000000000000000", 00:19:48.276 "model_number": "SPDK bdev Controller", 00:19:48.276 "max_namespaces": 32, 00:19:48.276 "min_cntlid": 1, 00:19:48.276 "max_cntlid": 65519, 00:19:48.276 "ana_reporting": false 00:19:48.276 } 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "method": "nvmf_subsystem_add_host", 00:19:48.276 "params": { 00:19:48.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.276 "host": "nqn.2016-06.io.spdk:host1", 00:19:48.276 "psk": "key0" 00:19:48.276 } 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "method": "nvmf_subsystem_add_ns", 00:19:48.276 "params": { 00:19:48.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.276 "namespace": { 00:19:48.276 "nsid": 1, 
00:19:48.276 "bdev_name": "malloc0", 00:19:48.276 "nguid": "8B2C2173A6394C9C8960601DF8161444", 00:19:48.276 "uuid": "8b2c2173-a639-4c9c-8960-601df8161444", 00:19:48.276 "no_auto_visible": false 00:19:48.276 } 00:19:48.276 } 00:19:48.276 }, 00:19:48.276 { 00:19:48.276 "method": "nvmf_subsystem_add_listener", 00:19:48.276 "params": { 00:19:48.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.276 "listen_address": { 00:19:48.276 "trtype": "TCP", 00:19:48.276 "adrfam": "IPv4", 00:19:48.276 "traddr": "10.0.0.2", 00:19:48.276 "trsvcid": "4420" 00:19:48.276 }, 00:19:48.276 "secure_channel": true 00:19:48.276 } 00:19:48.276 } 00:19:48.276 ] 00:19:48.276 } 00:19:48.276 ] 00:19:48.276 }' 00:19:48.276 15:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:48.535 15:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:19:48.535 "subsystems": [ 00:19:48.535 { 00:19:48.535 "subsystem": "keyring", 00:19:48.535 "config": [ 00:19:48.535 { 00:19:48.535 "method": "keyring_file_add_key", 00:19:48.535 "params": { 00:19:48.535 "name": "key0", 00:19:48.535 "path": "/tmp/tmp.yTLVvHckaJ" 00:19:48.535 } 00:19:48.535 } 00:19:48.535 ] 00:19:48.535 }, 00:19:48.535 { 00:19:48.535 "subsystem": "iobuf", 00:19:48.535 "config": [ 00:19:48.535 { 00:19:48.535 "method": "iobuf_set_options", 00:19:48.535 "params": { 00:19:48.535 "small_pool_count": 8192, 00:19:48.535 "large_pool_count": 1024, 00:19:48.535 "small_bufsize": 8192, 00:19:48.535 "large_bufsize": 135168 00:19:48.535 } 00:19:48.535 } 00:19:48.535 ] 00:19:48.535 }, 00:19:48.535 { 00:19:48.535 "subsystem": "sock", 00:19:48.535 "config": [ 00:19:48.535 { 00:19:48.535 "method": "sock_set_default_impl", 00:19:48.536 "params": { 00:19:48.536 "impl_name": "posix" 00:19:48.536 } 00:19:48.536 }, 00:19:48.536 { 00:19:48.536 "method": "sock_impl_set_options", 00:19:48.536 "params": { 00:19:48.536 "impl_name": "ssl", 00:19:48.536 "recv_buf_size": 4096, 00:19:48.536 "send_buf_size": 4096, 00:19:48.536 "enable_recv_pipe": true, 00:19:48.536 "enable_quickack": false, 00:19:48.536 "enable_placement_id": 0, 00:19:48.536 "enable_zerocopy_send_server": true, 00:19:48.536 "enable_zerocopy_send_client": false, 00:19:48.536 "zerocopy_threshold": 0, 00:19:48.536 "tls_version": 0, 00:19:48.536 "enable_ktls": false 00:19:48.536 } 00:19:48.536 }, 00:19:48.536 { 00:19:48.536 "method": "sock_impl_set_options", 00:19:48.536 "params": { 00:19:48.536 "impl_name": "posix", 00:19:48.536 "recv_buf_size": 2097152, 00:19:48.536 "send_buf_size": 2097152, 00:19:48.536 "enable_recv_pipe": true, 00:19:48.536 "enable_quickack": false, 00:19:48.536 "enable_placement_id": 0, 00:19:48.536 "enable_zerocopy_send_server": true, 00:19:48.536 "enable_zerocopy_send_client": false, 00:19:48.536 "zerocopy_threshold": 0, 00:19:48.536 "tls_version": 0, 00:19:48.536 "enable_ktls": false 00:19:48.536 } 00:19:48.536 } 00:19:48.536 ] 00:19:48.536 }, 00:19:48.536 { 00:19:48.536 "subsystem": "vmd", 00:19:48.536 "config": [] 00:19:48.536 }, 00:19:48.536 { 00:19:48.536 "subsystem": "accel", 00:19:48.536 "config": [ 00:19:48.536 { 00:19:48.536 "method": "accel_set_options", 00:19:48.536 "params": { 00:19:48.536 "small_cache_size": 128, 00:19:48.536 "large_cache_size": 16, 00:19:48.536 "task_count": 2048, 00:19:48.536 "sequence_count": 2048, 00:19:48.536 "buf_count": 2048 00:19:48.536 } 00:19:48.536 } 00:19:48.536 ] 00:19:48.536 }, 00:19:48.536 { 00:19:48.536 "subsystem": "bdev", 00:19:48.536 "config": [ 
00:19:48.536 { 00:19:48.536 "method": "bdev_set_options", 00:19:48.536 "params": { 00:19:48.536 "bdev_io_pool_size": 65535, 00:19:48.536 "bdev_io_cache_size": 256, 00:19:48.536 "bdev_auto_examine": true, 00:19:48.536 "iobuf_small_cache_size": 128, 00:19:48.536 "iobuf_large_cache_size": 16 00:19:48.536 } 00:19:48.536 }, 00:19:48.536 { 00:19:48.536 "method": "bdev_raid_set_options", 00:19:48.536 "params": { 00:19:48.536 "process_window_size_kb": 1024 00:19:48.536 } 00:19:48.536 }, 00:19:48.536 { 00:19:48.536 "method": "bdev_iscsi_set_options", 00:19:48.536 "params": { 00:19:48.536 "timeout_sec": 30 00:19:48.536 } 00:19:48.536 }, 00:19:48.536 { 00:19:48.536 "method": "bdev_nvme_set_options", 00:19:48.536 "params": { 00:19:48.536 "action_on_timeout": "none", 00:19:48.536 "timeout_us": 0, 00:19:48.536 "timeout_admin_us": 0, 00:19:48.536 "keep_alive_timeout_ms": 10000, 00:19:48.536 "arbitration_burst": 0, 00:19:48.536 "low_priority_weight": 0, 00:19:48.536 "medium_priority_weight": 0, 00:19:48.536 "high_priority_weight": 0, 00:19:48.536 "nvme_adminq_poll_period_us": 10000, 00:19:48.536 "nvme_ioq_poll_period_us": 0, 00:19:48.536 "io_queue_requests": 512, 00:19:48.536 "delay_cmd_submit": true, 00:19:48.536 "transport_retry_count": 4, 00:19:48.536 "bdev_retry_count": 3, 00:19:48.536 "transport_ack_timeout": 0, 00:19:48.536 "ctrlr_loss_timeout_sec": 0, 00:19:48.536 "reconnect_delay_sec": 0, 00:19:48.536 "fast_io_fail_timeout_sec": 0, 00:19:48.536 "disable_auto_failback": false, 00:19:48.536 "generate_uuids": false, 00:19:48.536 "transport_tos": 0, 00:19:48.536 "nvme_error_stat": false, 00:19:48.536 "rdma_srq_size": 0, 00:19:48.536 "io_path_stat": false, 00:19:48.536 "allow_accel_sequence": false, 00:19:48.536 "rdma_max_cq_size": 0, 00:19:48.536 "rdma_cm_event_timeout_ms": 0, 00:19:48.536 "dhchap_digests": [ 00:19:48.536 "sha256", 00:19:48.536 "sha384", 00:19:48.536 "sha512" 00:19:48.536 ], 00:19:48.536 "dhchap_dhgroups": [ 00:19:48.536 "null", 00:19:48.536 "ffdhe2048", 00:19:48.536 "ffdhe3072", 00:19:48.536 "ffdhe4096", 00:19:48.536 "ffdhe6144", 00:19:48.536 "ffdhe8192" 00:19:48.536 ] 00:19:48.536 } 00:19:48.536 }, 00:19:48.536 { 00:19:48.536 "method": "bdev_nvme_attach_controller", 00:19:48.536 "params": { 00:19:48.536 "name": "nvme0", 00:19:48.536 "trtype": "TCP", 00:19:48.536 "adrfam": "IPv4", 00:19:48.536 "traddr": "10.0.0.2", 00:19:48.536 "trsvcid": "4420", 00:19:48.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:48.536 "prchk_reftag": false, 00:19:48.536 "prchk_guard": false, 00:19:48.536 "ctrlr_loss_timeout_sec": 0, 00:19:48.536 "reconnect_delay_sec": 0, 00:19:48.536 "fast_io_fail_timeout_sec": 0, 00:19:48.536 "psk": "key0", 00:19:48.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:48.536 "hdgst": false, 00:19:48.536 "ddgst": false 00:19:48.536 } 00:19:48.536 }, 00:19:48.536 { 00:19:48.536 "method": "bdev_nvme_set_hotplug", 00:19:48.536 "params": { 00:19:48.536 "period_us": 100000, 00:19:48.536 "enable": false 00:19:48.536 } 00:19:48.536 }, 00:19:48.536 { 00:19:48.536 "method": "bdev_enable_histogram", 00:19:48.536 "params": { 00:19:48.536 "name": "nvme0n1", 00:19:48.536 "enable": true 00:19:48.536 } 00:19:48.536 }, 00:19:48.536 { 00:19:48.536 "method": "bdev_wait_for_examine" 00:19:48.536 } 00:19:48.536 ] 00:19:48.536 }, 00:19:48.536 { 00:19:48.536 "subsystem": "nbd", 00:19:48.536 "config": [] 00:19:48.536 } 00:19:48.536 ] 00:19:48.536 }' 00:19:48.536 15:56:45 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 784619 00:19:48.536 15:56:45 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 784619 ']' 00:19:48.536 15:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 784619 00:19:48.536 15:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:48.536 15:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.536 15:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 784619 00:19:48.795 15:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:48.795 15:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:48.795 15:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 784619' 00:19:48.795 killing process with pid 784619 00:19:48.795 15:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 784619 00:19:48.795 Received shutdown signal, test time was about 1.000000 seconds 00:19:48.795 00:19:48.795 Latency(us) 00:19:48.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.795 =================================================================================================================== 00:19:48.795 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:48.795 15:56:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 784619 00:19:49.052 15:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 784475 00:19:49.052 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 784475 ']' 00:19:49.052 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 784475 00:19:49.052 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:49.052 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:49.052 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 784475 00:19:49.052 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:49.052 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:49.052 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 784475' 00:19:49.052 killing process with pid 784475 00:19:49.053 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 784475 00:19:49.053 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 784475 00:19:49.310 15:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:19:49.310 15:56:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:49.310 15:56:46 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:19:49.310 "subsystems": [ 00:19:49.310 { 00:19:49.310 "subsystem": "keyring", 00:19:49.310 "config": [ 00:19:49.310 { 00:19:49.310 "method": "keyring_file_add_key", 00:19:49.310 "params": { 00:19:49.310 "name": "key0", 00:19:49.310 "path": "/tmp/tmp.yTLVvHckaJ" 00:19:49.310 } 00:19:49.310 } 00:19:49.310 ] 00:19:49.310 }, 00:19:49.310 { 00:19:49.310 "subsystem": "iobuf", 00:19:49.310 "config": [ 00:19:49.310 { 00:19:49.310 "method": "iobuf_set_options", 00:19:49.310 "params": { 00:19:49.310 "small_pool_count": 8192, 00:19:49.310 "large_pool_count": 1024, 00:19:49.310 "small_bufsize": 8192, 00:19:49.310 "large_bufsize": 135168 00:19:49.310 } 00:19:49.310 } 00:19:49.310 ] 00:19:49.310 }, 00:19:49.310 { 00:19:49.310 "subsystem": "sock", 00:19:49.310 "config": [ 00:19:49.310 { 00:19:49.310 "method": 
"sock_set_default_impl", 00:19:49.310 "params": { 00:19:49.310 "impl_name": "posix" 00:19:49.310 } 00:19:49.310 }, 00:19:49.310 { 00:19:49.310 "method": "sock_impl_set_options", 00:19:49.310 "params": { 00:19:49.310 "impl_name": "ssl", 00:19:49.310 "recv_buf_size": 4096, 00:19:49.310 "send_buf_size": 4096, 00:19:49.310 "enable_recv_pipe": true, 00:19:49.310 "enable_quickack": false, 00:19:49.310 "enable_placement_id": 0, 00:19:49.310 "enable_zerocopy_send_server": true, 00:19:49.310 "enable_zerocopy_send_client": false, 00:19:49.310 "zerocopy_threshold": 0, 00:19:49.310 "tls_version": 0, 00:19:49.310 "enable_ktls": false 00:19:49.310 } 00:19:49.310 }, 00:19:49.310 { 00:19:49.310 "method": "sock_impl_set_options", 00:19:49.310 "params": { 00:19:49.310 "impl_name": "posix", 00:19:49.310 "recv_buf_size": 2097152, 00:19:49.310 "send_buf_size": 2097152, 00:19:49.310 "enable_recv_pipe": true, 00:19:49.310 "enable_quickack": false, 00:19:49.310 "enable_placement_id": 0, 00:19:49.310 "enable_zerocopy_send_server": true, 00:19:49.310 "enable_zerocopy_send_client": false, 00:19:49.310 "zerocopy_threshold": 0, 00:19:49.310 "tls_version": 0, 00:19:49.310 "enable_ktls": false 00:19:49.310 } 00:19:49.310 } 00:19:49.310 ] 00:19:49.310 }, 00:19:49.310 { 00:19:49.310 "subsystem": "vmd", 00:19:49.310 "config": [] 00:19:49.310 }, 00:19:49.310 { 00:19:49.310 "subsystem": "accel", 00:19:49.310 "config": [ 00:19:49.310 { 00:19:49.310 "method": "accel_set_options", 00:19:49.310 "params": { 00:19:49.310 "small_cache_size": 128, 00:19:49.310 "large_cache_size": 16, 00:19:49.310 "task_count": 2048, 00:19:49.310 "sequence_count": 2048, 00:19:49.310 "buf_count": 2048 00:19:49.310 } 00:19:49.310 } 00:19:49.310 ] 00:19:49.310 }, 00:19:49.310 { 00:19:49.310 "subsystem": "bdev", 00:19:49.310 "config": [ 00:19:49.310 { 00:19:49.310 "method": "bdev_set_options", 00:19:49.310 "params": { 00:19:49.310 "bdev_io_pool_size": 65535, 00:19:49.310 "bdev_io_cache_size": 256, 00:19:49.310 "bdev_auto_examine": true, 00:19:49.310 "iobuf_small_cache_size": 128, 00:19:49.310 "iobuf_large_cache_size": 16 00:19:49.310 } 00:19:49.310 }, 00:19:49.310 { 00:19:49.310 "method": "bdev_raid_set_options", 00:19:49.310 "params": { 00:19:49.310 "process_window_size_kb": 1024 00:19:49.310 } 00:19:49.310 }, 00:19:49.310 { 00:19:49.310 "method": "bdev_iscsi_set_options", 00:19:49.310 "params": { 00:19:49.310 "timeout_sec": 30 00:19:49.310 } 00:19:49.310 }, 00:19:49.310 { 00:19:49.310 "method": "bdev_nvme_set_options", 00:19:49.310 "params": { 00:19:49.310 "action_on_timeout": "none", 00:19:49.310 "timeout_us": 0, 00:19:49.310 "timeout_admin_us": 0, 00:19:49.310 "keep_alive_timeout_ms": 10000, 00:19:49.310 "arbitration_burst": 0, 00:19:49.310 "low_priority_weight": 0, 00:19:49.310 "medium_priority_weight": 0, 00:19:49.310 "high_priority_weight": 0, 00:19:49.310 "nvme_adminq_poll_period_us": 10000, 00:19:49.310 "nvme_ioq_poll_period_us": 0, 00:19:49.310 "io_queue_requests": 0, 00:19:49.310 "delay_cmd_submit": true, 00:19:49.310 "transport_retry_count": 4, 00:19:49.310 "bdev_retry_count": 3, 00:19:49.310 "transport_ack_timeout": 0, 00:19:49.310 "ctrlr_loss_timeout_sec": 0, 00:19:49.310 "reconnect_delay_sec": 0, 00:19:49.310 "fast_io_fail_timeout_sec": 0, 00:19:49.310 "disable_auto_failback": false, 00:19:49.310 "generate_uuids": false, 00:19:49.310 "transport_tos": 0, 00:19:49.310 "nvme_error_stat": false, 00:19:49.310 "rdma_srq_size": 0, 00:19:49.310 "io_path_stat": false, 00:19:49.310 "allow_accel_sequence": false, 00:19:49.310 "rdma_max_cq_size": 0, 
00:19:49.310 "rdma_cm_event_timeout_ms": 0, 00:19:49.310 "dhchap_digests": [ 00:19:49.310 "sha256", 00:19:49.310 "sha384", 00:19:49.310 "sha512" 00:19:49.310 ], 00:19:49.310 "dhchap_dhgroups": [ 00:19:49.310 "null", 00:19:49.310 "ffdhe2048", 00:19:49.310 "ffdhe3072", 00:19:49.310 "ffdhe4096", 00:19:49.310 "ffdhe6144", 00:19:49.310 "ffdhe8192" 00:19:49.310 ] 00:19:49.310 } 00:19:49.310 }, 00:19:49.310 { 00:19:49.310 "method": "bdev_nvme_set_hotplug", 00:19:49.310 "params": { 00:19:49.310 "period_us": 100000, 00:19:49.310 "enable": false 00:19:49.310 } 00:19:49.310 }, 00:19:49.310 { 00:19:49.310 "method": "bdev_malloc_create", 00:19:49.310 "params": { 00:19:49.310 "name": "malloc0", 00:19:49.310 "num_blocks": 8192, 00:19:49.310 "block_size": 4096, 00:19:49.310 "physical_block_size": 4096, 00:19:49.310 "uuid": "8b2c2173-a639-4c9c-8960-601df8161444", 00:19:49.310 "optimal_io_boundary": 0 00:19:49.310 } 00:19:49.310 }, 00:19:49.310 { 00:19:49.310 "method": "bdev_wait_for_examine" 00:19:49.310 } 00:19:49.311 ] 00:19:49.311 }, 00:19:49.311 { 00:19:49.311 "subsystem": "nbd", 00:19:49.311 "config": [] 00:19:49.311 }, 00:19:49.311 { 00:19:49.311 "subsystem": "scheduler", 00:19:49.311 "config": [ 00:19:49.311 { 00:19:49.311 "method": "framework_set_scheduler", 00:19:49.311 "params": { 00:19:49.311 "name": "static" 00:19:49.311 } 00:19:49.311 } 00:19:49.311 ] 00:19:49.311 }, 00:19:49.311 { 00:19:49.311 "subsystem": "nvmf", 00:19:49.311 "config": [ 00:19:49.311 { 00:19:49.311 "method": "nvmf_set_config", 00:19:49.311 "params": { 00:19:49.311 "discovery_filter": "match_any", 00:19:49.311 "admin_cmd_passthru": { 00:19:49.311 "identify_ctrlr": false 00:19:49.311 } 00:19:49.311 } 00:19:49.311 }, 00:19:49.311 { 00:19:49.311 "method": "nvmf_set_max_subsystems", 00:19:49.311 "params": { 00:19:49.311 "max_subsystems": 1024 00:19:49.311 } 00:19:49.311 }, 00:19:49.311 { 00:19:49.311 "method": "nvmf_set_crdt", 00:19:49.311 "params": { 00:19:49.311 "crdt1": 0, 00:19:49.311 "crdt2": 0, 00:19:49.311 "crdt3": 0 00:19:49.311 } 00:19:49.311 }, 00:19:49.311 { 00:19:49.311 "method": "nvmf_create_transport", 00:19:49.311 "params": { 00:19:49.311 "trtype": "TCP", 00:19:49.311 "max_queue_depth": 128, 00:19:49.311 "max_io_qpairs_per_ctrlr": 127, 00:19:49.311 "in_capsule_data_size": 4096, 00:19:49.311 "max_io_size": 131072, 00:19:49.311 "io_unit_size": 131072, 00:19:49.311 "max_aq_depth": 128, 00:19:49.311 "num_shared_buffers": 511, 00:19:49.311 "buf_cache_size": 4294967295, 00:19:49.311 "dif_insert_or_strip": false, 00:19:49.311 "zcopy": false, 00:19:49.311 "c2h_success": false, 00:19:49.311 "sock_priority": 0, 00:19:49.311 "abort_timeout_sec": 1, 00:19:49.311 "ack_timeout": 0, 00:19:49.311 "data_wr_pool_size": 0 00:19:49.311 } 00:19:49.311 }, 00:19:49.311 { 00:19:49.311 "method": "nvmf_create_subsystem", 00:19:49.311 "params": { 00:19:49.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.311 "allow_any_host": false, 00:19:49.311 "serial_number": "00000000000000000000", 00:19:49.311 "model_number": "SPDK bdev Controller", 00:19:49.311 "max_namespaces": 32, 00:19:49.311 "min_cntlid": 1, 00:19:49.311 "max_cntlid": 65519, 00:19:49.311 "ana_reporting": false 00:19:49.311 } 00:19:49.311 }, 00:19:49.311 { 00:19:49.311 "method": "nvmf_subsystem_add_host", 00:19:49.311 "params": { 00:19:49.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.311 "host": "nqn.2016-06.io.spdk:host1", 00:19:49.311 "psk": "key0" 00:19:49.311 } 00:19:49.311 }, 00:19:49.311 { 00:19:49.311 "method": "nvmf_subsystem_add_ns", 00:19:49.311 "params": { 
00:19:49.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.311 "namespace": { 00:19:49.311 "nsid": 1, 00:19:49.311 "bdev_name": "malloc0", 00:19:49.311 "nguid": "8B2C2173A6394C9C8960601DF8161444", 00:19:49.311 "uuid": "8b2c2173-a639-4c9c-8960-601df8161444", 00:19:49.311 "no_auto_visible": false 00:19:49.311 } 00:19:49.311 } 00:19:49.311 }, 00:19:49.311 { 00:19:49.311 "method": "nvmf_subsystem_add_listener", 00:19:49.311 "params": { 00:19:49.311 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.311 "listen_address": { 00:19:49.311 "trtype": "TCP", 00:19:49.311 "adrfam": "IPv4", 00:19:49.311 "traddr": "10.0.0.2", 00:19:49.311 "trsvcid": "4420" 00:19:49.311 }, 00:19:49.311 "secure_channel": true 00:19:49.311 } 00:19:49.311 } 00:19:49.311 ] 00:19:49.311 } 00:19:49.311 ] 00:19:49.311 }' 00:19:49.311 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.311 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.311 15:56:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=784909 00:19:49.311 15:56:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:49.311 15:56:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 784909 00:19:49.311 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 784909 ']' 00:19:49.311 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.311 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.311 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.311 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.311 15:56:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.311 [2024-07-12 15:56:46.479726] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:19:49.311 [2024-07-12 15:56:46.479840] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.311 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.311 [2024-07-12 15:56:46.544150] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.567 [2024-07-12 15:56:46.650154] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.567 [2024-07-12 15:56:46.650204] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.567 [2024-07-12 15:56:46.650229] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.567 [2024-07-12 15:56:46.650241] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.567 [2024-07-12 15:56:46.650251] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:49.567 [2024-07-12 15:56:46.650329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.824 [2024-07-12 15:56:46.878191] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.824 [2024-07-12 15:56:46.910177] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.824 [2024-07-12 15:56:46.918921] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=785063 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 785063 /var/tmp/bdevperf.sock 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 785063 ']' 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:19:50.389 "subsystems": [ 00:19:50.389 { 00:19:50.389 "subsystem": "keyring", 00:19:50.389 "config": [ 00:19:50.389 { 00:19:50.389 "method": "keyring_file_add_key", 00:19:50.389 "params": { 00:19:50.389 "name": "key0", 00:19:50.389 "path": "/tmp/tmp.yTLVvHckaJ" 00:19:50.389 } 00:19:50.389 } 00:19:50.389 ] 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "subsystem": "iobuf", 00:19:50.389 "config": [ 00:19:50.389 { 00:19:50.389 "method": "iobuf_set_options", 00:19:50.389 "params": { 00:19:50.389 "small_pool_count": 8192, 00:19:50.389 "large_pool_count": 1024, 00:19:50.389 "small_bufsize": 8192, 00:19:50.389 "large_bufsize": 135168 00:19:50.389 } 00:19:50.389 } 00:19:50.389 ] 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "subsystem": "sock", 00:19:50.389 "config": [ 00:19:50.389 { 00:19:50.389 "method": "sock_set_default_impl", 00:19:50.389 "params": { 00:19:50.389 "impl_name": "posix" 00:19:50.389 } 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "method": "sock_impl_set_options", 00:19:50.389 "params": { 00:19:50.389 "impl_name": "ssl", 00:19:50.389 "recv_buf_size": 4096, 00:19:50.389 "send_buf_size": 4096, 00:19:50.389 "enable_recv_pipe": true, 00:19:50.389 "enable_quickack": false, 00:19:50.389 "enable_placement_id": 0, 00:19:50.389 "enable_zerocopy_send_server": true, 00:19:50.389 "enable_zerocopy_send_client": false, 00:19:50.389 "zerocopy_threshold": 0, 00:19:50.389 "tls_version": 0, 00:19:50.389 "enable_ktls": false 00:19:50.389 } 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "method": "sock_impl_set_options", 00:19:50.389 "params": { 00:19:50.389 "impl_name": "posix", 00:19:50.389 "recv_buf_size": 2097152, 00:19:50.389 "send_buf_size": 2097152, 00:19:50.389 
"enable_recv_pipe": true, 00:19:50.389 "enable_quickack": false, 00:19:50.389 "enable_placement_id": 0, 00:19:50.389 "enable_zerocopy_send_server": true, 00:19:50.389 "enable_zerocopy_send_client": false, 00:19:50.389 "zerocopy_threshold": 0, 00:19:50.389 "tls_version": 0, 00:19:50.389 "enable_ktls": false 00:19:50.389 } 00:19:50.389 } 00:19:50.389 ] 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "subsystem": "vmd", 00:19:50.389 "config": [] 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "subsystem": "accel", 00:19:50.389 "config": [ 00:19:50.389 { 00:19:50.389 "method": "accel_set_options", 00:19:50.389 "params": { 00:19:50.389 "small_cache_size": 128, 00:19:50.389 "large_cache_size": 16, 00:19:50.389 "task_count": 2048, 00:19:50.389 "sequence_count": 2048, 00:19:50.389 "buf_count": 2048 00:19:50.389 } 00:19:50.389 } 00:19:50.389 ] 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "subsystem": "bdev", 00:19:50.389 "config": [ 00:19:50.389 { 00:19:50.389 "method": "bdev_set_options", 00:19:50.389 "params": { 00:19:50.389 "bdev_io_pool_size": 65535, 00:19:50.389 "bdev_io_cache_size": 256, 00:19:50.389 "bdev_auto_examine": true, 00:19:50.389 "iobuf_small_cache_size": 128, 00:19:50.389 "iobuf_large_cache_size": 16 00:19:50.389 } 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "method": "bdev_raid_set_options", 00:19:50.389 "params": { 00:19:50.389 "process_window_size_kb": 1024 00:19:50.389 } 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "method": "bdev_iscsi_set_options", 00:19:50.389 "params": { 00:19:50.389 "timeout_sec": 30 00:19:50.389 } 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "method": "bdev_nvme_set_options", 00:19:50.389 "params": { 00:19:50.389 "action_on_timeout": "none", 00:19:50.389 "timeout_us": 0, 00:19:50.389 "timeout_admin_us": 0, 00:19:50.389 "keep_alive_timeout_ms": 10000, 00:19:50.389 "arbitration_burst": 0, 00:19:50.389 "low_priority_weight": 0, 00:19:50.389 "medium_priority_weight": 0, 00:19:50.389 "high_priority_weight": 0, 00:19:50.389 "nvme_adminq_poll_period_us": 10000, 00:19:50.389 "nvme_ioq_poll_period_us": 0, 00:19:50.389 "io_queue_requests": 512, 00:19:50.389 "delay_cmd_submit": true, 00:19:50.389 "transport_retry_count": 4, 00:19:50.389 "bdev_retry_count": 3, 00:19:50.389 "transport_ack_timeout": 0, 00:19:50.389 "ctrlr_loss_timeout_sec": 0, 00:19:50.389 "reconnect_delay_sec": 0, 00:19:50.389 "fast_io_fail_timeout_sec": 0, 00:19:50.389 "disable_auto_failback": false, 00:19:50.389 "generate_uuids": false, 00:19:50.389 "transport_tos": 0, 00:19:50.389 "nvme_error_stat": false, 00:19:50.389 "rdma_srq_size": 0, 00:19:50.389 "io_path_stat": false, 00:19:50.389 "allow_accel_sequence": false, 00:19:50.389 "rdma_max_cq_size": 0, 00:19:50.389 "rdma_cm_event_timeout_ms": 0, 00:19:50.389 "dhchap_digests": [ 00:19:50.389 "sha256", 00:19:50.389 "sha384", 00:19:50.389 "sha512" 00:19:50.389 ], 00:19:50.389 "dhchap_dhgroups": [ 00:19:50.389 "null", 00:19:50.389 "ffdhe2048", 00:19:50.389 "ffdhe3072", 00:19:50.389 "ffdhe4096", 00:19:50.389 "ffdhe6144", 00:19:50.389 "ffdhe8192" 00:19:50.389 ] 00:19:50.389 } 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "method": "bdev_nvme_attach_controller", 00:19:50.389 "params": { 00:19:50.389 "name": "nvme0", 00:19:50.389 "trtype": "TCP", 00:19:50.389 "adrfam": "IPv4", 00:19:50.389 "traddr": "10.0.0.2", 00:19:50.389 "trsvcid": "4420", 00:19:50.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.389 "prchk_reftag": false, 00:19:50.389 "prchk_guard": false, 00:19:50.389 "ctrlr_loss_timeout_sec": 0, 00:19:50.389 "reconnect_delay_sec": 0, 00:19:50.389 
"fast_io_fail_timeout_sec": 0, 00:19:50.389 "psk": "key0", 00:19:50.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.389 "hdgst": false, 00:19:50.389 "ddgst": false 00:19:50.389 } 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "method": "bdev_nvme_set_hotplug", 00:19:50.389 "params": { 00:19:50.389 "period_us": 100000, 00:19:50.389 "enable": false 00:19:50.389 } 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "method": "bdev_enable_histogram", 00:19:50.389 "params": { 00:19:50.389 "name": "nvme0n1", 00:19:50.389 "enable": true 00:19:50.389 } 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "method": "bdev_wait_for_examine" 00:19:50.389 } 00:19:50.389 ] 00:19:50.389 }, 00:19:50.389 { 00:19:50.389 "subsystem": "nbd", 00:19:50.389 "config": [] 00:19:50.389 } 00:19:50.389 ] 00:19:50.389 }' 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.389 15:56:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.389 [2024-07-12 15:56:47.477903] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:19:50.389 [2024-07-12 15:56:47.477990] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785063 ] 00:19:50.389 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.389 [2024-07-12 15:56:47.535377] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.389 [2024-07-12 15:56:47.641248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.648 [2024-07-12 15:56:47.815166] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.213 15:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.213 15:56:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:51.213 15:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:51.213 15:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:19:51.470 15:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.470 15:56:48 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:51.728 Running I/O for 1 seconds... 
00:19:52.665 00:19:52.665 Latency(us) 00:19:52.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.665 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:52.665 Verification LBA range: start 0x0 length 0x2000 00:19:52.665 nvme0n1 : 1.02 3334.37 13.02 0.00 0.00 37961.15 6189.51 40001.23 00:19:52.665 =================================================================================================================== 00:19:52.665 Total : 3334.37 13.02 0.00 0.00 37961.15 6189.51 40001.23 00:19:52.665 0 00:19:52.665 15:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:52.666 nvmf_trace.0 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 785063 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 785063 ']' 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 785063 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 785063 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 785063' 00:19:52.666 killing process with pid 785063 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 785063 00:19:52.666 Received shutdown signal, test time was about 1.000000 seconds 00:19:52.666 00:19:52.666 Latency(us) 00:19:52.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.666 =================================================================================================================== 00:19:52.666 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.666 15:56:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 785063 00:19:52.924 15:56:50 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:52.924 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:52.924 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:52.924 
15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:52.924 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:52.924 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.924 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:52.924 rmmod nvme_tcp 00:19:53.181 rmmod nvme_fabrics 00:19:53.181 rmmod nvme_keyring 00:19:53.181 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.181 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:53.181 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:53.181 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 784909 ']' 00:19:53.181 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 784909 00:19:53.182 15:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 784909 ']' 00:19:53.182 15:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 784909 00:19:53.182 15:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:53.182 15:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:53.182 15:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 784909 00:19:53.182 15:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:53.182 15:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:53.182 15:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 784909' 00:19:53.182 killing process with pid 784909 00:19:53.182 15:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 784909 00:19:53.182 15:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 784909 00:19:53.442 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.442 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:53.442 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:53.442 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.442 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.442 15:56:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.442 15:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.442 15:56:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.346 15:56:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:55.346 15:56:52 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.NusktBuLR7 /tmp/tmp.iPgBSD1ZU1 /tmp/tmp.yTLVvHckaJ 00:19:55.346 00:19:55.346 real 1m20.335s 00:19:55.346 user 2m7.799s 00:19:55.346 sys 0m28.809s 00:19:55.346 15:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:55.346 15:56:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:55.346 ************************************ 00:19:55.346 END TEST nvmf_tls 00:19:55.346 ************************************ 00:19:55.346 15:56:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:55.346 15:56:52 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:55.346 15:56:52 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:55.346 15:56:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.346 15:56:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:55.346 ************************************ 00:19:55.346 START TEST nvmf_fips 00:19:55.346 ************************************ 00:19:55.346 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:55.604 * Looking for test storage... 00:19:55.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:55.604 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:55.605 
15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
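The ge / cmp_versions trace above is just a field-by-field numeric comparison: both version strings are split on dots and compared left to right until one side wins. A minimal standalone sketch of the same idea in bash (version_ge is a simplified stand-in for the scripts/common.sh helper and assumes purely numeric fields):

    # version_ge A B -> exit 0 if dotted version A >= B, 1 otherwise
    version_ge() {
        local -a a b
        local i
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 0   # first higher field decides
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 1
        done
        return 0   # all fields equal counts as >=
    }

    # The same gate the test applies: provider-based FIPS needs OpenSSL >= 3.0.0
    version_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo 'OpenSSL is new enough'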
-t 0 ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:19:55.605 Error setting digest 00:19:55.605 0082C4C8D97F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:55.605 0082C4C8D97F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:55.605 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.606 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:55.606 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:55.606 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:55.606 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.606 15:56:52 nvmf_tcp.nvmf_fips -- 
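The NOT openssl md5 /dev/fd/62 step above is a negative test: with OPENSSL_CONF pointing at the generated spdk_fips.conf, the provider stack must refuse to fetch MD5, and the non-zero exit ('Error setting digest', es=1) is what proves FIPS enforcement is actually active. A standalone reproduction sketch, assuming an OpenSSL 3.x install with the fips provider configured as above:

    # MD5 is not FIPS-approved, so this must fail when enforcement is on
    if echo -n test | openssl md5 >/dev/null 2>&1; then
        echo 'FAIL: MD5 digest succeeded - FIPS provider is not enforcing approved algorithms' >&2
        exit 1
    fi
    echo 'OK: MD5 rejected, FIPS enforcement looks active'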
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.606 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.606 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:55.606 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:55.606 15:56:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:55.606 15:56:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:58.138 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:58.139 
15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:19:58.139 Found 0000:84:00.0 (0x8086 - 0x159b) 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:19:58.139 Found 0000:84:00.1 (0x8086 - 0x159b) 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:19:58.139 Found net devices under 0000:84:00.0: cvl_0_0 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:19:58.139 Found net devices under 0000:84:00.1: cvl_0_1 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:58.139 15:56:54 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:58.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:19:58.139 00:19:58.139 --- 10.0.0.2 ping statistics --- 00:19:58.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.139 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:58.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:58.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:19:58.139 00:19:58.139 --- 10.0.0.1 ping statistics --- 00:19:58.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.139 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=787445 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 787445 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 787445 ']' 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.139 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:58.139 [2024-07-12 15:56:55.169536] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:19:58.139 [2024-07-12 15:56:55.169611] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.139 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.139 [2024-07-12 15:56:55.234415] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.139 [2024-07-12 15:56:55.342109] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.139 [2024-07-12 15:56:55.342162] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
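The nvmf_tcp_init and nvmfappstart steps above split the two E810 ports across network namespaces and then start the target inside the namespace, so target and initiator traffic really crosses the NICs instead of loopback. A condensed sketch of that wiring with the interface names, addresses and core mask from this run (run as root; SPDK paths shortened):

    NS=cvl_0_0_ns_spdk                         # target-side namespace
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # root namespace -> target address
    ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace -> initiator address

    # Launch the NVMe-oF target on core mask 0x2 inside the namespace
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &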
00:19:58.139 [2024-07-12 15:56:55.342176] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.139 [2024-07-12 15:56:55.342187] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.139 [2024-07-12 15:56:55.342198] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:58.139 [2024-07-12 15:56:55.342224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.398 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.398 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:58.398 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:58.398 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:58.398 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:58.398 15:56:55 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.398 15:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:58.398 15:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:58.398 15:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:58.398 15:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:58.398 15:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:58.398 15:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:58.398 15:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:58.398 15:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:58.657 [2024-07-12 15:56:55.754603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.657 [2024-07-12 15:56:55.770578] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:58.657 [2024-07-12 15:56:55.770826] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.657 [2024-07-12 15:56:55.801231] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:58.657 malloc0 00:19:58.657 15:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.657 15:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=787567 00:19:58.657 15:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.657 15:56:55 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 787567 /var/tmp/bdevperf.sock 00:19:58.657 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 787567 ']' 00:19:58.657 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.657 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:19:58.657 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.657 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.657 15:56:55 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:58.657 [2024-07-12 15:56:55.889781] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:19:58.657 [2024-07-12 15:56:55.889874] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787567 ] 00:19:58.657 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.916 [2024-07-12 15:56:55.950918] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.916 [2024-07-12 15:56:56.057228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.853 15:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.853 15:56:56 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:19:59.853 15:56:56 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:59.853 [2024-07-12 15:56:57.082477] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.853 [2024-07-12 15:56:57.082603] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:00.112 TLSTESTn1 00:20:00.112 15:56:57 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:00.112 Running I/O for 10 seconds... 
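The TLSTESTn1 setup just traced is the heart of the FIPS exercise: a PSK is written to a 0600 key file, the target exposes a TLS listener on 10.0.0.2:4420, and a separate bdevperf process attaches with the same key and drives a 10-second verify workload over the encrypted connection. A condensed sketch of the bdevperf half, using the key string and arguments from this run (workspace paths shortened; the target-side setup_nvmf_tgt_conf RPC calls are omitted):

    KEY=key.txt
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
    chmod 0600 "$KEY"

    # bdevperf: core mask 0x4, wait for RPC (-z), queue depth 128, 4 KiB verify I/O for 10 s
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # Attach a TLS-protected NVMe/TCP controller with the PSK, then start the run
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$KEY"
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests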
00:20:10.090 00:20:10.090 Latency(us) 00:20:10.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.090 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:10.090 Verification LBA range: start 0x0 length 0x2000 00:20:10.090 TLSTESTn1 : 10.02 3556.18 13.89 0.00 0.00 35933.10 8980.86 55924.05 00:20:10.090 =================================================================================================================== 00:20:10.090 Total : 3556.18 13.89 0.00 0.00 35933.10 8980.86 55924.05 00:20:10.090 0 00:20:10.090 15:57:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:10.090 15:57:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:10.090 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:20:10.090 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:20:10.090 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:10.090 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:10.090 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:10.090 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:10.090 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:10.090 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:10.090 nvmf_trace.0 00:20:10.348 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:20:10.348 15:57:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 787567 00:20:10.348 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 787567 ']' 00:20:10.348 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 787567 00:20:10.348 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:10.348 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.348 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 787567 00:20:10.348 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:10.348 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:10.348 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 787567' 00:20:10.348 killing process with pid 787567 00:20:10.348 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 787567 00:20:10.348 Received shutdown signal, test time was about 10.000000 seconds 00:20:10.348 00:20:10.348 Latency(us) 00:20:10.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.348 =================================================================================================================== 00:20:10.348 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.348 [2024-07-12 15:57:07.411416] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:10.348 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 787567 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:10.606 rmmod nvme_tcp 00:20:10.606 rmmod nvme_fabrics 00:20:10.606 rmmod nvme_keyring 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 787445 ']' 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 787445 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 787445 ']' 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 787445 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 787445 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 787445' 00:20:10.606 killing process with pid 787445 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 787445 00:20:10.606 [2024-07-12 15:57:07.742083] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:10.606 15:57:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 787445 00:20:10.863 15:57:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:10.863 15:57:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:10.863 15:57:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:10.863 15:57:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.863 15:57:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:10.863 15:57:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.863 15:57:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.863 15:57:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.766 15:57:10 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:12.766 15:57:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:12.767 00:20:12.767 real 0m17.417s 00:20:12.767 user 0m21.964s 00:20:12.767 sys 0m6.764s 00:20:12.767 15:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.767 15:57:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:12.767 ************************************ 00:20:12.767 END TEST nvmf_fips 00:20:12.767 
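The nvmftestfini / cleanup trace above tears the fixture back down: bdevperf and the target reactor are killed by pid after a sanity check on the process name, the NVMe/TCP kernel modules are unloaded, the initiator address is flushed and the PSK file removed. A minimal teardown sketch with the pid from this run (the ip netns delete line is an assumption about what the redirected _remove_spdk_ns helper amounts to):

    TGT_PID=787445                             # nvmf_tgt pid reported by nvmfappstart above
    # Only the plain-kill branch of killprocess is needed here (target not run via sudo);
    # wait succeeds because the target was launched from this same shell.
    [ "$(ps --no-headers -o comm= "$TGT_PID")" != sudo ] && kill "$TGT_PID" && wait "$TGT_PID"

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    ip -4 addr flush cvl_0_1
    ip netns delete cvl_0_0_ns_spdk            # assumption: namespace removal done by _remove_spdk_ns
    rm -f key.txt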
************************************ 00:20:13.025 15:57:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:13.025 15:57:10 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:20:13.025 15:57:10 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:20:13.025 15:57:10 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:20:13.025 15:57:10 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:20:13.025 15:57:10 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:20:13.025 15:57:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:14.925 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:14.925 15:57:12 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:14.925 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:14.925 Found net devices under 0000:84:00.0: cvl_0_0 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.925 15:57:12 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.926 15:57:12 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:14.926 Found net devices under 0000:84:00.1: cvl_0_1 00:20:14.926 15:57:12 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.926 15:57:12 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:14.926 15:57:12 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.926 15:57:12 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:20:14.926 15:57:12 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:14.926 15:57:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:14.926 15:57:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:20:14.926 15:57:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:15.184 ************************************ 00:20:15.184 START TEST nvmf_perf_adq 00:20:15.184 ************************************ 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:15.184 * Looking for test storage... 00:20:15.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.184 15:57:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:15.185 15:57:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:17.090 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:17.090 Found 0000:84:00.1 (0x8086 - 0x159b) 
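gather_supported_nvmf_pci_devs runs again for the perf_adq test: each supported NIC, here the two E810 functions (8086:159b), is resolved to its kernel net device through sysfs before the interfaces are picked for the ADQ setup. A standalone sketch of that sysfs walk (the lspci filter is a stand-in for the script's pci_bus_cache lookup):

    # List the net devices behind every Intel E810 (8086:159b) PCI function
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$netdir" ] || continue       # port has no bound net driver
            echo "Found net devices under $pci: $(basename "$netdir")"
        done
    done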
00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:17.090 Found net devices under 0000:84:00.0: cvl_0_0 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.090 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:17.090 Found net devices under 0000:84:00.1: cvl_0_1 00:20:17.091 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.091 15:57:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:17.091 15:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.091 15:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:17.091 15:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:17.091 15:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:17.091 15:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:18.028 15:57:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:19.961 15:57:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:25.231 15:57:22 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:25.232 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:25.232 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:25.232 Found net devices under 0000:84:00.0: cvl_0_0 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:25.232 Found net devices under 0000:84:00.1: cvl_0_1 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:25.232 15:57:22 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:25.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:20:25.232 00:20:25.232 --- 10.0.0.2 ping statistics --- 00:20:25.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.232 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:25.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:20:25.232 00:20:25.232 --- 10.0.0.1 ping statistics --- 00:20:25.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.232 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=794125 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 794125 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 794125 ']' 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.232 [2024-07-12 15:57:22.253366] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
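nvmf_tcp_init, traced above, splits the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened, both directions are ping-tested, and nvmf_tgt is then launched inside the namespace with --wait-for-rpc. A condensed sketch of that setup, reusing the interface names and addresses from this log (run as root; other hosts will have different device names):

  # Target port goes into its own network namespace; initiator port stays in the root ns.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic to the default port, then verify reachability both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1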
00:20:25.232 [2024-07-12 15:57:22.253468] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.232 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.232 [2024-07-12 15:57:22.316880] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:25.232 [2024-07-12 15:57:22.424525] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.232 [2024-07-12 15:57:22.424583] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.232 [2024-07-12 15:57:22.424605] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.232 [2024-07-12 15:57:22.424614] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.232 [2024-07-12 15:57:22.424623] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.232 [2024-07-12 15:57:22.424702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.232 [2024-07-12 15:57:22.424780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.232 [2024-07-12 15:57:22.424856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.232 [2024-07-12 15:57:22.424858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.232 15:57:22 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.489 [2024-07-12 15:57:22.619281] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.489 Malloc1 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:25.489 [2024-07-12 15:57:22.669600] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=794155 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:25.489 15:57:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:25.489 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.011 15:57:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:28.011 15:57:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.011 15:57:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:28.011 15:57:24 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.012 15:57:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:28.012 
"tick_rate": 2700000000, 00:20:28.012 "poll_groups": [ 00:20:28.012 { 00:20:28.012 "name": "nvmf_tgt_poll_group_000", 00:20:28.012 "admin_qpairs": 1, 00:20:28.012 "io_qpairs": 1, 00:20:28.012 "current_admin_qpairs": 1, 00:20:28.012 "current_io_qpairs": 1, 00:20:28.012 "pending_bdev_io": 0, 00:20:28.012 "completed_nvme_io": 19717, 00:20:28.012 "transports": [ 00:20:28.012 { 00:20:28.012 "trtype": "TCP" 00:20:28.012 } 00:20:28.012 ] 00:20:28.012 }, 00:20:28.012 { 00:20:28.012 "name": "nvmf_tgt_poll_group_001", 00:20:28.012 "admin_qpairs": 0, 00:20:28.012 "io_qpairs": 1, 00:20:28.012 "current_admin_qpairs": 0, 00:20:28.012 "current_io_qpairs": 1, 00:20:28.012 "pending_bdev_io": 0, 00:20:28.012 "completed_nvme_io": 19920, 00:20:28.012 "transports": [ 00:20:28.012 { 00:20:28.012 "trtype": "TCP" 00:20:28.012 } 00:20:28.012 ] 00:20:28.012 }, 00:20:28.012 { 00:20:28.012 "name": "nvmf_tgt_poll_group_002", 00:20:28.012 "admin_qpairs": 0, 00:20:28.012 "io_qpairs": 1, 00:20:28.012 "current_admin_qpairs": 0, 00:20:28.012 "current_io_qpairs": 1, 00:20:28.012 "pending_bdev_io": 0, 00:20:28.012 "completed_nvme_io": 19994, 00:20:28.012 "transports": [ 00:20:28.012 { 00:20:28.012 "trtype": "TCP" 00:20:28.012 } 00:20:28.012 ] 00:20:28.012 }, 00:20:28.012 { 00:20:28.012 "name": "nvmf_tgt_poll_group_003", 00:20:28.012 "admin_qpairs": 0, 00:20:28.012 "io_qpairs": 1, 00:20:28.012 "current_admin_qpairs": 0, 00:20:28.012 "current_io_qpairs": 1, 00:20:28.012 "pending_bdev_io": 0, 00:20:28.012 "completed_nvme_io": 19529, 00:20:28.012 "transports": [ 00:20:28.012 { 00:20:28.012 "trtype": "TCP" 00:20:28.012 } 00:20:28.012 ] 00:20:28.012 } 00:20:28.012 ] 00:20:28.012 }' 00:20:28.012 15:57:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:28.012 15:57:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:28.012 15:57:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:28.012 15:57:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:28.012 15:57:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 794155 00:20:36.116 Initializing NVMe Controllers 00:20:36.116 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:36.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:36.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:36.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:36.116 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:36.116 Initialization complete. Launching workers. 
00:20:36.116 ======================================================== 00:20:36.116 Latency(us) 00:20:36.116 Device Information : IOPS MiB/s Average min max 00:20:36.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10335.30 40.37 6172.83 2720.49 66331.23 00:20:36.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10400.90 40.63 6136.52 2300.39 60177.09 00:20:36.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10559.10 41.25 6042.66 2146.00 63519.71 00:20:36.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10419.30 40.70 6123.57 2241.91 64253.20 00:20:36.116 ======================================================== 00:20:36.116 Total : 41714.59 162.95 6118.52 2146.00 66331.23 00:20:36.116 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:36.116 rmmod nvme_tcp 00:20:36.116 rmmod nvme_fabrics 00:20:36.116 rmmod nvme_keyring 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 794125 ']' 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 794125 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 794125 ']' 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 794125 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 794125 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 794125' 00:20:36.116 killing process with pid 794125 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 794125 00:20:36.116 15:57:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 794125 00:20:36.116 15:57:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:36.116 15:57:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:36.116 15:57:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:36.116 15:57:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:36.116 15:57:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:36.116 15:57:33 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.116 15:57:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.116 15:57:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.020 15:57:35 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:38.020 15:57:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:38.020 15:57:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:38.585 15:57:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:41.114 15:57:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:46.385 15:57:42 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:46.385 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:46.385 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
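The adq_reload_driver step traced a few records above (perf_adq.sh@53-55) resets the NIC between the baseline run and the ADQ run, so the second pass starts from a clean queue and channel configuration. The sequence is simply:

  # Reload the ice driver and give the E810 ports time to re-register their net devices.
  rmmod ice
  modprobe ice
  sleep 5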
00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.385 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:46.386 Found net devices under 0000:84:00.0: cvl_0_0 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:46.386 Found net devices under 0000:84:00.1: cvl_0_1 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:46.386 
15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:46.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:20:46.386 00:20:46.386 --- 10.0.0.2 ping statistics --- 00:20:46.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.386 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:46.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:20:46.386 00:20:46.386 --- 10.0.0.1 ping statistics --- 00:20:46.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.386 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.386 15:57:42 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:46.386 net.core.busy_poll = 1 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:46.386 net.core.busy_read = 1 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=796768 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 796768 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 796768 ']' 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:46.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.386 [2024-07-12 15:57:43.208552] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:20:46.386 [2024-07-12 15:57:43.208666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:46.386 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.386 [2024-07-12 15:57:43.273569] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:46.386 [2024-07-12 15:57:43.382656] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:46.386 [2024-07-12 15:57:43.382709] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:46.386 [2024-07-12 15:57:43.382743] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:46.386 [2024-07-12 15:57:43.382755] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:46.386 [2024-07-12 15:57:43.382766] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
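adq_configure_driver, traced above, is the host-side half of the ADQ setup: hardware TC offload is enabled on the target port, busy polling is turned on, an mqprio root qdisc splits the port into two traffic classes in channel mode, and a hardware flower filter steers the NVMe/TCP listener traffic (10.0.0.2:4420) into the dedicated class; the test then runs its set_xps_rxqs helper to align transmit queues with the receive queues (not reproduced here). The same sequence with the values used in this log follows; in the test the ethtool and tc commands are prefixed with ip netns exec cvl_0_0_ns_spdk because the target port lives in that namespace, and the prefix is dropped here for readability:

  ethtool --offload cvl_0_0 hw-tc-offload on
  ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # Two traffic classes: TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (ADQ application queues).
  tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev cvl_0_0 ingress
  # Steer NVMe/TCP traffic for the target listener into TC1, offloaded to hardware (skip_sw).
  tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1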
00:20:46.386 [2024-07-12 15:57:43.382849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.386 [2024-07-12 15:57:43.382923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:46.386 [2024-07-12 15:57:43.382983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.386 [2024-07-12 15:57:43.382981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.386 [2024-07-12 15:57:43.592348] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:46.386 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.387 Malloc1 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.387 15:57:43 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:46.387 [2024-07-12 15:57:43.643127] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=796918 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:46.387 15:57:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:46.387 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.919 15:57:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:48.919 15:57:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.919 15:57:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:48.919 15:57:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.919 15:57:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:48.919 "tick_rate": 2700000000, 00:20:48.919 "poll_groups": [ 00:20:48.919 { 00:20:48.919 "name": "nvmf_tgt_poll_group_000", 00:20:48.919 "admin_qpairs": 1, 00:20:48.919 "io_qpairs": 1, 00:20:48.919 "current_admin_qpairs": 1, 00:20:48.919 "current_io_qpairs": 1, 00:20:48.919 "pending_bdev_io": 0, 00:20:48.919 "completed_nvme_io": 24864, 00:20:48.919 "transports": [ 00:20:48.919 { 00:20:48.919 "trtype": "TCP" 00:20:48.919 } 00:20:48.919 ] 00:20:48.919 }, 00:20:48.919 { 00:20:48.919 "name": "nvmf_tgt_poll_group_001", 00:20:48.919 "admin_qpairs": 0, 00:20:48.919 "io_qpairs": 3, 00:20:48.919 "current_admin_qpairs": 0, 00:20:48.919 "current_io_qpairs": 3, 00:20:48.919 "pending_bdev_io": 0, 00:20:48.919 "completed_nvme_io": 26816, 00:20:48.919 "transports": [ 00:20:48.919 { 00:20:48.919 "trtype": "TCP" 00:20:48.919 } 00:20:48.919 ] 00:20:48.919 }, 00:20:48.919 { 00:20:48.919 "name": "nvmf_tgt_poll_group_002", 00:20:48.919 "admin_qpairs": 0, 00:20:48.919 "io_qpairs": 0, 00:20:48.919 "current_admin_qpairs": 0, 00:20:48.919 "current_io_qpairs": 0, 00:20:48.919 "pending_bdev_io": 0, 00:20:48.919 "completed_nvme_io": 0, 
00:20:48.919 "transports": [ 00:20:48.919 { 00:20:48.919 "trtype": "TCP" 00:20:48.919 } 00:20:48.919 ] 00:20:48.919 }, 00:20:48.919 { 00:20:48.919 "name": "nvmf_tgt_poll_group_003", 00:20:48.919 "admin_qpairs": 0, 00:20:48.919 "io_qpairs": 0, 00:20:48.919 "current_admin_qpairs": 0, 00:20:48.919 "current_io_qpairs": 0, 00:20:48.919 "pending_bdev_io": 0, 00:20:48.919 "completed_nvme_io": 0, 00:20:48.919 "transports": [ 00:20:48.919 { 00:20:48.919 "trtype": "TCP" 00:20:48.919 } 00:20:48.919 ] 00:20:48.919 } 00:20:48.919 ] 00:20:48.919 }' 00:20:48.919 15:57:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:48.919 15:57:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:48.919 15:57:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:48.919 15:57:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:48.919 15:57:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 796918 00:20:57.023 Initializing NVMe Controllers 00:20:57.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:57.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:57.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:57.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:57.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:57.023 Initialization complete. Launching workers. 00:20:57.023 ======================================================== 00:20:57.023 Latency(us) 00:20:57.023 Device Information : IOPS MiB/s Average min max 00:20:57.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4625.80 18.07 13816.68 1957.11 71139.09 00:20:57.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4878.60 19.06 13085.20 2255.21 70487.04 00:20:57.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4442.50 17.35 14370.47 2289.16 71351.34 00:20:57.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13544.20 52.91 4710.53 1908.29 65499.18 00:20:57.023 ======================================================== 00:20:57.023 Total : 27491.09 107.39 9289.98 1908.29 71351.34 00:20:57.023 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:57.023 rmmod nvme_tcp 00:20:57.023 rmmod nvme_fabrics 00:20:57.023 rmmod nvme_keyring 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 796768 ']' 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 796768 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 796768 ']' 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 796768 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 796768 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 796768' 00:20:57.023 killing process with pid 796768 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 796768 00:20:57.023 15:57:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 796768 00:20:57.023 15:57:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:57.023 15:57:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:57.023 15:57:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:57.023 15:57:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:57.023 15:57:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:57.023 15:57:54 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.023 15:57:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.023 15:57:54 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.312 15:57:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:00.312 15:57:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:00.312 00:21:00.312 real 0m44.934s 00:21:00.312 user 2m39.155s 00:21:00.312 sys 0m9.808s 00:21:00.312 15:57:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:00.312 15:57:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.312 ************************************ 00:21:00.312 END TEST nvmf_perf_adq 00:21:00.312 ************************************ 00:21:00.312 15:57:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:00.312 15:57:57 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:00.312 15:57:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:00.312 15:57:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.312 15:57:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:00.312 ************************************ 00:21:00.312 START TEST nvmf_shutdown 00:21:00.312 ************************************ 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:00.312 * Looking for test storage... 
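For reference, the perf_adq flow captured above reduces to three RPCs, one spdk_nvme_perf run, and a stats check. A minimal manual sketch, assuming a running nvmf_tgt reachable on the default RPC socket and an SPDK checkout at ./spdk (this run uses its full Jenkins workspace path instead):

./spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# 4 KiB random reads, queue depth 64, 10 s, on cores 4-7 (mask 0xF0), matching the run above
./spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
# the test then counts poll groups with no active I/O qpairs and expects at least 2 of the 4 to stay idle
./spdk/scripts/rpc.py nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l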
00:21:00.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.312 15:57:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:00.313 ************************************ 00:21:00.313 START TEST nvmf_shutdown_tc1 00:21:00.313 ************************************ 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:00.313 15:57:57 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:00.313 15:57:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:02.213 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:02.214 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:02.214 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:02.214 15:57:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:02.214 Found net devices under 0000:84:00.0: cvl_0_0 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:02.214 Found net devices under 0000:84:00.1: cvl_0_1 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:02.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:21:02.214 00:21:02.214 --- 10.0.0.2 ping statistics --- 00:21:02.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.214 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:21:02.214 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:21:02.475 00:21:02.475 --- 10.0.0.1 ping statistics --- 00:21:02.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.475 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=800224 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 800224 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 800224 ']' 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.475 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:02.475 [2024-07-12 15:57:59.580061] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
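The nvmftestinit trace above splits the two e810 ports between namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side on 10.0.0.1, with TCP/4420 opened and both directions ping-checked before the target starts. Condensed, keeping the interface names from this run and with ./spdk standing in for the full workspace checkout path:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the target runs inside the namespace so its TCP listener binds 10.0.0.2 (core mask and trace flags as passed by the test)
ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E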
00:21:02.475 [2024-07-12 15:57:59.580142] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.475 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.475 [2024-07-12 15:57:59.644964] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.475 [2024-07-12 15:57:59.754365] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.475 [2024-07-12 15:57:59.754431] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.475 [2024-07-12 15:57:59.754459] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.475 [2024-07-12 15:57:59.754470] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.475 [2024-07-12 15:57:59.754479] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.475 [2024-07-12 15:57:59.754530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.476 [2024-07-12 15:57:59.754590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.476 [2024-07-12 15:57:59.754662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.476 [2024-07-12 15:57:59.754658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:02.779 [2024-07-12 15:57:59.923667] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:02.779 15:57:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.779 15:57:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:02.779 Malloc1 00:21:02.779 [2024-07-12 15:58:00.013055] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.779 Malloc2 00:21:03.067 Malloc3 00:21:03.067 Malloc4 00:21:03.067 Malloc5 00:21:03.067 Malloc6 00:21:03.067 Malloc7 00:21:03.067 Malloc8 00:21:03.325 Malloc9 00:21:03.325 Malloc10 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=800290 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 800290 
/var/tmp/bdevperf.sock 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 800290 ']' 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:03.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.325 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.325 { 00:21:03.325 "params": { 00:21:03.325 "name": "Nvme$subsystem", 00:21:03.325 "trtype": "$TEST_TRANSPORT", 00:21:03.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.325 "adrfam": "ipv4", 00:21:03.325 "trsvcid": "$NVMF_PORT", 00:21:03.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.325 "hdgst": ${hdgst:-false}, 00:21:03.325 "ddgst": ${ddgst:-false} 00:21:03.325 }, 00:21:03.325 "method": "bdev_nvme_attach_controller" 00:21:03.325 } 00:21:03.326 EOF 00:21:03.326 )") 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.326 { 00:21:03.326 "params": { 00:21:03.326 "name": "Nvme$subsystem", 00:21:03.326 "trtype": "$TEST_TRANSPORT", 00:21:03.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.326 "adrfam": "ipv4", 00:21:03.326 "trsvcid": "$NVMF_PORT", 00:21:03.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.326 "hdgst": ${hdgst:-false}, 00:21:03.326 "ddgst": ${ddgst:-false} 00:21:03.326 }, 00:21:03.326 "method": "bdev_nvme_attach_controller" 00:21:03.326 } 00:21:03.326 EOF 00:21:03.326 )") 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.326 { 00:21:03.326 "params": { 00:21:03.326 
"name": "Nvme$subsystem", 00:21:03.326 "trtype": "$TEST_TRANSPORT", 00:21:03.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.326 "adrfam": "ipv4", 00:21:03.326 "trsvcid": "$NVMF_PORT", 00:21:03.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.326 "hdgst": ${hdgst:-false}, 00:21:03.326 "ddgst": ${ddgst:-false} 00:21:03.326 }, 00:21:03.326 "method": "bdev_nvme_attach_controller" 00:21:03.326 } 00:21:03.326 EOF 00:21:03.326 )") 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.326 { 00:21:03.326 "params": { 00:21:03.326 "name": "Nvme$subsystem", 00:21:03.326 "trtype": "$TEST_TRANSPORT", 00:21:03.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.326 "adrfam": "ipv4", 00:21:03.326 "trsvcid": "$NVMF_PORT", 00:21:03.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.326 "hdgst": ${hdgst:-false}, 00:21:03.326 "ddgst": ${ddgst:-false} 00:21:03.326 }, 00:21:03.326 "method": "bdev_nvme_attach_controller" 00:21:03.326 } 00:21:03.326 EOF 00:21:03.326 )") 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.326 { 00:21:03.326 "params": { 00:21:03.326 "name": "Nvme$subsystem", 00:21:03.326 "trtype": "$TEST_TRANSPORT", 00:21:03.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.326 "adrfam": "ipv4", 00:21:03.326 "trsvcid": "$NVMF_PORT", 00:21:03.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.326 "hdgst": ${hdgst:-false}, 00:21:03.326 "ddgst": ${ddgst:-false} 00:21:03.326 }, 00:21:03.326 "method": "bdev_nvme_attach_controller" 00:21:03.326 } 00:21:03.326 EOF 00:21:03.326 )") 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.326 { 00:21:03.326 "params": { 00:21:03.326 "name": "Nvme$subsystem", 00:21:03.326 "trtype": "$TEST_TRANSPORT", 00:21:03.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.326 "adrfam": "ipv4", 00:21:03.326 "trsvcid": "$NVMF_PORT", 00:21:03.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.326 "hdgst": ${hdgst:-false}, 00:21:03.326 "ddgst": ${ddgst:-false} 00:21:03.326 }, 00:21:03.326 "method": "bdev_nvme_attach_controller" 00:21:03.326 } 00:21:03.326 EOF 00:21:03.326 )") 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.326 { 00:21:03.326 "params": { 00:21:03.326 "name": "Nvme$subsystem", 
00:21:03.326 "trtype": "$TEST_TRANSPORT", 00:21:03.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.326 "adrfam": "ipv4", 00:21:03.326 "trsvcid": "$NVMF_PORT", 00:21:03.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.326 "hdgst": ${hdgst:-false}, 00:21:03.326 "ddgst": ${ddgst:-false} 00:21:03.326 }, 00:21:03.326 "method": "bdev_nvme_attach_controller" 00:21:03.326 } 00:21:03.326 EOF 00:21:03.326 )") 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.326 { 00:21:03.326 "params": { 00:21:03.326 "name": "Nvme$subsystem", 00:21:03.326 "trtype": "$TEST_TRANSPORT", 00:21:03.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.326 "adrfam": "ipv4", 00:21:03.326 "trsvcid": "$NVMF_PORT", 00:21:03.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.326 "hdgst": ${hdgst:-false}, 00:21:03.326 "ddgst": ${ddgst:-false} 00:21:03.326 }, 00:21:03.326 "method": "bdev_nvme_attach_controller" 00:21:03.326 } 00:21:03.326 EOF 00:21:03.326 )") 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.326 { 00:21:03.326 "params": { 00:21:03.326 "name": "Nvme$subsystem", 00:21:03.326 "trtype": "$TEST_TRANSPORT", 00:21:03.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.326 "adrfam": "ipv4", 00:21:03.326 "trsvcid": "$NVMF_PORT", 00:21:03.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.326 "hdgst": ${hdgst:-false}, 00:21:03.326 "ddgst": ${ddgst:-false} 00:21:03.326 }, 00:21:03.326 "method": "bdev_nvme_attach_controller" 00:21:03.326 } 00:21:03.326 EOF 00:21:03.326 )") 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:03.326 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:03.326 { 00:21:03.326 "params": { 00:21:03.326 "name": "Nvme$subsystem", 00:21:03.326 "trtype": "$TEST_TRANSPORT", 00:21:03.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:03.326 "adrfam": "ipv4", 00:21:03.326 "trsvcid": "$NVMF_PORT", 00:21:03.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:03.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:03.326 "hdgst": ${hdgst:-false}, 00:21:03.326 "ddgst": ${ddgst:-false} 00:21:03.326 }, 00:21:03.326 "method": "bdev_nvme_attach_controller" 00:21:03.326 } 00:21:03.326 EOF 00:21:03.326 )") 00:21:03.327 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:03.327 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:21:03.327 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:03.327 15:58:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:03.327 "params": { 00:21:03.327 "name": "Nvme1", 00:21:03.327 "trtype": "tcp", 00:21:03.327 "traddr": "10.0.0.2", 00:21:03.327 "adrfam": "ipv4", 00:21:03.327 "trsvcid": "4420", 00:21:03.327 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.327 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.327 "hdgst": false, 00:21:03.327 "ddgst": false 00:21:03.327 }, 00:21:03.327 "method": "bdev_nvme_attach_controller" 00:21:03.327 },{ 00:21:03.327 "params": { 00:21:03.327 "name": "Nvme2", 00:21:03.327 "trtype": "tcp", 00:21:03.327 "traddr": "10.0.0.2", 00:21:03.327 "adrfam": "ipv4", 00:21:03.327 "trsvcid": "4420", 00:21:03.327 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:03.327 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:03.327 "hdgst": false, 00:21:03.327 "ddgst": false 00:21:03.327 }, 00:21:03.327 "method": "bdev_nvme_attach_controller" 00:21:03.327 },{ 00:21:03.327 "params": { 00:21:03.327 "name": "Nvme3", 00:21:03.327 "trtype": "tcp", 00:21:03.327 "traddr": "10.0.0.2", 00:21:03.327 "adrfam": "ipv4", 00:21:03.327 "trsvcid": "4420", 00:21:03.327 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:03.327 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:03.327 "hdgst": false, 00:21:03.327 "ddgst": false 00:21:03.327 }, 00:21:03.327 "method": "bdev_nvme_attach_controller" 00:21:03.327 },{ 00:21:03.327 "params": { 00:21:03.327 "name": "Nvme4", 00:21:03.327 "trtype": "tcp", 00:21:03.327 "traddr": "10.0.0.2", 00:21:03.327 "adrfam": "ipv4", 00:21:03.327 "trsvcid": "4420", 00:21:03.327 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:03.327 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:03.327 "hdgst": false, 00:21:03.327 "ddgst": false 00:21:03.327 }, 00:21:03.327 "method": "bdev_nvme_attach_controller" 00:21:03.327 },{ 00:21:03.327 "params": { 00:21:03.327 "name": "Nvme5", 00:21:03.327 "trtype": "tcp", 00:21:03.327 "traddr": "10.0.0.2", 00:21:03.327 "adrfam": "ipv4", 00:21:03.327 "trsvcid": "4420", 00:21:03.327 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:03.327 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:03.327 "hdgst": false, 00:21:03.327 "ddgst": false 00:21:03.327 }, 00:21:03.327 "method": "bdev_nvme_attach_controller" 00:21:03.327 },{ 00:21:03.327 "params": { 00:21:03.327 "name": "Nvme6", 00:21:03.327 "trtype": "tcp", 00:21:03.327 "traddr": "10.0.0.2", 00:21:03.327 "adrfam": "ipv4", 00:21:03.327 "trsvcid": "4420", 00:21:03.327 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:03.327 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:03.327 "hdgst": false, 00:21:03.327 "ddgst": false 00:21:03.327 }, 00:21:03.327 "method": "bdev_nvme_attach_controller" 00:21:03.327 },{ 00:21:03.327 "params": { 00:21:03.327 "name": "Nvme7", 00:21:03.327 "trtype": "tcp", 00:21:03.327 "traddr": "10.0.0.2", 00:21:03.327 "adrfam": "ipv4", 00:21:03.327 "trsvcid": "4420", 00:21:03.327 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:03.327 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:03.327 "hdgst": false, 00:21:03.327 "ddgst": false 00:21:03.327 }, 00:21:03.327 "method": "bdev_nvme_attach_controller" 00:21:03.327 },{ 00:21:03.327 "params": { 00:21:03.327 "name": "Nvme8", 00:21:03.327 "trtype": "tcp", 00:21:03.327 "traddr": "10.0.0.2", 00:21:03.327 "adrfam": "ipv4", 00:21:03.327 "trsvcid": "4420", 00:21:03.327 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:03.327 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:03.327 "hdgst": false, 
00:21:03.327 "ddgst": false 00:21:03.327 }, 00:21:03.327 "method": "bdev_nvme_attach_controller" 00:21:03.327 },{ 00:21:03.327 "params": { 00:21:03.327 "name": "Nvme9", 00:21:03.327 "trtype": "tcp", 00:21:03.327 "traddr": "10.0.0.2", 00:21:03.327 "adrfam": "ipv4", 00:21:03.327 "trsvcid": "4420", 00:21:03.327 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:03.327 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:03.327 "hdgst": false, 00:21:03.327 "ddgst": false 00:21:03.327 }, 00:21:03.327 "method": "bdev_nvme_attach_controller" 00:21:03.327 },{ 00:21:03.327 "params": { 00:21:03.327 "name": "Nvme10", 00:21:03.327 "trtype": "tcp", 00:21:03.327 "traddr": "10.0.0.2", 00:21:03.327 "adrfam": "ipv4", 00:21:03.327 "trsvcid": "4420", 00:21:03.327 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:03.327 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:03.327 "hdgst": false, 00:21:03.327 "ddgst": false 00:21:03.327 }, 00:21:03.327 "method": "bdev_nvme_attach_controller" 00:21:03.327 }' 00:21:03.327 [2024-07-12 15:58:00.536645] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:21:03.327 [2024-07-12 15:58:00.536749] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:03.327 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.327 [2024-07-12 15:58:00.603594] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.585 [2024-07-12 15:58:00.715981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.481 15:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.481 15:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:05.481 15:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:05.481 15:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.481 15:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:05.481 15:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.481 15:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 800290 00:21:05.481 15:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:05.481 15:58:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:06.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 800290 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 800224 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@532 -- # local subsystem config 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.411 { 00:21:06.411 "params": { 00:21:06.411 "name": "Nvme$subsystem", 00:21:06.411 "trtype": "$TEST_TRANSPORT", 00:21:06.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.411 "adrfam": "ipv4", 00:21:06.411 "trsvcid": "$NVMF_PORT", 00:21:06.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.411 "hdgst": ${hdgst:-false}, 00:21:06.411 "ddgst": ${ddgst:-false} 00:21:06.411 }, 00:21:06.411 "method": "bdev_nvme_attach_controller" 00:21:06.411 } 00:21:06.411 EOF 00:21:06.411 )") 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.411 { 00:21:06.411 "params": { 00:21:06.411 "name": "Nvme$subsystem", 00:21:06.411 "trtype": "$TEST_TRANSPORT", 00:21:06.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.411 "adrfam": "ipv4", 00:21:06.411 "trsvcid": "$NVMF_PORT", 00:21:06.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.411 "hdgst": ${hdgst:-false}, 00:21:06.411 "ddgst": ${ddgst:-false} 00:21:06.411 }, 00:21:06.411 "method": "bdev_nvme_attach_controller" 00:21:06.411 } 00:21:06.411 EOF 00:21:06.411 )") 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.411 { 00:21:06.411 "params": { 00:21:06.411 "name": "Nvme$subsystem", 00:21:06.411 "trtype": "$TEST_TRANSPORT", 00:21:06.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.411 "adrfam": "ipv4", 00:21:06.411 "trsvcid": "$NVMF_PORT", 00:21:06.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.411 "hdgst": ${hdgst:-false}, 00:21:06.411 "ddgst": ${ddgst:-false} 00:21:06.411 }, 00:21:06.411 "method": "bdev_nvme_attach_controller" 00:21:06.411 } 00:21:06.411 EOF 00:21:06.411 )") 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.411 { 00:21:06.411 "params": { 00:21:06.411 "name": "Nvme$subsystem", 00:21:06.411 "trtype": "$TEST_TRANSPORT", 00:21:06.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.411 "adrfam": "ipv4", 00:21:06.411 "trsvcid": "$NVMF_PORT", 00:21:06.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.411 "hdgst": ${hdgst:-false}, 00:21:06.411 "ddgst": ${ddgst:-false} 00:21:06.411 }, 00:21:06.411 "method": "bdev_nvme_attach_controller" 00:21:06.411 } 00:21:06.411 EOF 00:21:06.411 )") 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.411 { 00:21:06.411 "params": { 00:21:06.411 "name": "Nvme$subsystem", 00:21:06.411 "trtype": "$TEST_TRANSPORT", 00:21:06.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.411 "adrfam": "ipv4", 00:21:06.411 "trsvcid": "$NVMF_PORT", 00:21:06.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.411 "hdgst": ${hdgst:-false}, 00:21:06.411 "ddgst": ${ddgst:-false} 00:21:06.411 }, 00:21:06.411 "method": "bdev_nvme_attach_controller" 00:21:06.411 } 00:21:06.411 EOF 00:21:06.411 )") 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.411 { 00:21:06.411 "params": { 00:21:06.411 "name": "Nvme$subsystem", 00:21:06.411 "trtype": "$TEST_TRANSPORT", 00:21:06.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.411 "adrfam": "ipv4", 00:21:06.411 "trsvcid": "$NVMF_PORT", 00:21:06.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.411 "hdgst": ${hdgst:-false}, 00:21:06.411 "ddgst": ${ddgst:-false} 00:21:06.411 }, 00:21:06.411 "method": "bdev_nvme_attach_controller" 00:21:06.411 } 00:21:06.411 EOF 00:21:06.411 )") 00:21:06.411 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.412 { 00:21:06.412 "params": { 00:21:06.412 "name": "Nvme$subsystem", 00:21:06.412 "trtype": "$TEST_TRANSPORT", 00:21:06.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.412 "adrfam": "ipv4", 00:21:06.412 "trsvcid": "$NVMF_PORT", 00:21:06.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.412 "hdgst": ${hdgst:-false}, 00:21:06.412 "ddgst": ${ddgst:-false} 00:21:06.412 }, 00:21:06.412 "method": "bdev_nvme_attach_controller" 00:21:06.412 } 00:21:06.412 EOF 00:21:06.412 )") 00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.412 { 00:21:06.412 "params": { 00:21:06.412 "name": "Nvme$subsystem", 00:21:06.412 "trtype": "$TEST_TRANSPORT", 00:21:06.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.412 "adrfam": "ipv4", 00:21:06.412 "trsvcid": "$NVMF_PORT", 00:21:06.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.412 "hdgst": ${hdgst:-false}, 00:21:06.412 "ddgst": ${ddgst:-false} 00:21:06.412 }, 00:21:06.412 "method": "bdev_nvme_attach_controller" 00:21:06.412 } 00:21:06.412 EOF 00:21:06.412 )") 00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.412 { 00:21:06.412 "params": { 00:21:06.412 "name": "Nvme$subsystem", 00:21:06.412 "trtype": "$TEST_TRANSPORT", 00:21:06.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.412 "adrfam": "ipv4", 00:21:06.412 "trsvcid": "$NVMF_PORT", 00:21:06.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.412 "hdgst": ${hdgst:-false}, 00:21:06.412 "ddgst": ${ddgst:-false} 00:21:06.412 }, 00:21:06.412 "method": "bdev_nvme_attach_controller" 00:21:06.412 } 00:21:06.412 EOF 00:21:06.412 )") 00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:06.412 { 00:21:06.412 "params": { 00:21:06.412 "name": "Nvme$subsystem", 00:21:06.412 "trtype": "$TEST_TRANSPORT", 00:21:06.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:06.412 "adrfam": "ipv4", 00:21:06.412 "trsvcid": "$NVMF_PORT", 00:21:06.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:06.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:06.412 "hdgst": ${hdgst:-false}, 00:21:06.412 "ddgst": ${ddgst:-false} 00:21:06.412 }, 00:21:06.412 "method": "bdev_nvme_attach_controller" 00:21:06.412 } 00:21:06.412 EOF 00:21:06.412 )") 00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
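[Annotation] The xtrace above shows nvmf/common.sh building one bdev_nvme_attach_controller fragment per subsystem with a heredoc, appending each fragment to the config array, and finally piping the joined result through jq. The block below is only a standalone sketch of that pattern: the function name is illustrative, the fields simply mirror the variables visible in the trace (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT, hdgst, ddgst), and the real helper embeds the fragments in a larger bdevperf configuration rather than a bare JSON array.

# Sketch only: one JSON fragment per subsystem, joined with commas as in the trace.
gen_target_json_sketch() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments the way the trace does (IFS=, plus printf), but wrap
    # them in [] here so the sketch emits valid JSON on its own before jq.
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}

Called as gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10, the sketch yields ten attach_controller entries analogous to the ones printed by the harness in the next records of this trace.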
00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:06.412 15:58:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:06.412 "params": { 00:21:06.412 "name": "Nvme1", 00:21:06.412 "trtype": "tcp", 00:21:06.412 "traddr": "10.0.0.2", 00:21:06.412 "adrfam": "ipv4", 00:21:06.412 "trsvcid": "4420", 00:21:06.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:06.412 "hdgst": false, 00:21:06.412 "ddgst": false 00:21:06.412 }, 00:21:06.412 "method": "bdev_nvme_attach_controller" 00:21:06.412 },{ 00:21:06.412 "params": { 00:21:06.412 "name": "Nvme2", 00:21:06.412 "trtype": "tcp", 00:21:06.412 "traddr": "10.0.0.2", 00:21:06.412 "adrfam": "ipv4", 00:21:06.412 "trsvcid": "4420", 00:21:06.412 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:06.412 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:06.412 "hdgst": false, 00:21:06.412 "ddgst": false 00:21:06.412 }, 00:21:06.412 "method": "bdev_nvme_attach_controller" 00:21:06.412 },{ 00:21:06.412 "params": { 00:21:06.412 "name": "Nvme3", 00:21:06.412 "trtype": "tcp", 00:21:06.412 "traddr": "10.0.0.2", 00:21:06.412 "adrfam": "ipv4", 00:21:06.412 "trsvcid": "4420", 00:21:06.412 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:06.412 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:06.412 "hdgst": false, 00:21:06.412 "ddgst": false 00:21:06.412 }, 00:21:06.412 "method": "bdev_nvme_attach_controller" 00:21:06.412 },{ 00:21:06.412 "params": { 00:21:06.412 "name": "Nvme4", 00:21:06.412 "trtype": "tcp", 00:21:06.412 "traddr": "10.0.0.2", 00:21:06.412 "adrfam": "ipv4", 00:21:06.412 "trsvcid": "4420", 00:21:06.412 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:06.412 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:06.412 "hdgst": false, 00:21:06.412 "ddgst": false 00:21:06.412 }, 00:21:06.412 "method": "bdev_nvme_attach_controller" 00:21:06.412 },{ 00:21:06.412 "params": { 00:21:06.412 "name": "Nvme5", 00:21:06.412 "trtype": "tcp", 00:21:06.412 "traddr": "10.0.0.2", 00:21:06.412 "adrfam": "ipv4", 00:21:06.412 "trsvcid": "4420", 00:21:06.412 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:06.412 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:06.412 "hdgst": false, 00:21:06.412 "ddgst": false 00:21:06.412 }, 00:21:06.412 "method": "bdev_nvme_attach_controller" 00:21:06.412 },{ 00:21:06.412 "params": { 00:21:06.412 "name": "Nvme6", 00:21:06.412 "trtype": "tcp", 00:21:06.412 "traddr": "10.0.0.2", 00:21:06.412 "adrfam": "ipv4", 00:21:06.412 "trsvcid": "4420", 00:21:06.412 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:06.412 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:06.412 "hdgst": false, 00:21:06.412 "ddgst": false 00:21:06.412 }, 00:21:06.412 "method": "bdev_nvme_attach_controller" 00:21:06.412 },{ 00:21:06.412 "params": { 00:21:06.412 "name": "Nvme7", 00:21:06.412 "trtype": "tcp", 00:21:06.412 "traddr": "10.0.0.2", 00:21:06.412 "adrfam": "ipv4", 00:21:06.412 "trsvcid": "4420", 00:21:06.412 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:06.412 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:06.412 "hdgst": false, 00:21:06.412 "ddgst": false 00:21:06.412 }, 00:21:06.412 "method": "bdev_nvme_attach_controller" 00:21:06.412 },{ 00:21:06.412 "params": { 00:21:06.412 "name": "Nvme8", 00:21:06.412 "trtype": "tcp", 00:21:06.412 "traddr": "10.0.0.2", 00:21:06.412 "adrfam": "ipv4", 00:21:06.412 "trsvcid": "4420", 00:21:06.412 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:06.412 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:06.412 "hdgst": false, 
00:21:06.412 "ddgst": false 00:21:06.412 }, 00:21:06.412 "method": "bdev_nvme_attach_controller" 00:21:06.412 },{ 00:21:06.412 "params": { 00:21:06.412 "name": "Nvme9", 00:21:06.412 "trtype": "tcp", 00:21:06.412 "traddr": "10.0.0.2", 00:21:06.412 "adrfam": "ipv4", 00:21:06.412 "trsvcid": "4420", 00:21:06.412 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:06.412 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:06.412 "hdgst": false, 00:21:06.412 "ddgst": false 00:21:06.412 }, 00:21:06.412 "method": "bdev_nvme_attach_controller" 00:21:06.412 },{ 00:21:06.412 "params": { 00:21:06.412 "name": "Nvme10", 00:21:06.412 "trtype": "tcp", 00:21:06.412 "traddr": "10.0.0.2", 00:21:06.412 "adrfam": "ipv4", 00:21:06.412 "trsvcid": "4420", 00:21:06.412 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:06.412 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:06.412 "hdgst": false, 00:21:06.412 "ddgst": false 00:21:06.412 }, 00:21:06.412 "method": "bdev_nvme_attach_controller" 00:21:06.412 }' 00:21:06.412 [2024-07-12 15:58:03.641539] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:21:06.412 [2024-07-12 15:58:03.641635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800726 ] 00:21:06.412 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.670 [2024-07-12 15:58:03.706536] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.670 [2024-07-12 15:58:03.821998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.042 Running I/O for 1 seconds... 00:21:09.417 00:21:09.417 Latency(us) 00:21:09.417 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.417 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.417 Verification LBA range: start 0x0 length 0x400 00:21:09.417 Nvme1n1 : 1.10 240.91 15.06 0.00 0.00 259902.77 11747.93 262532.36 00:21:09.417 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.417 Verification LBA range: start 0x0 length 0x400 00:21:09.417 Nvme2n1 : 1.14 224.77 14.05 0.00 0.00 276277.48 19126.80 262532.36 00:21:09.417 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.417 Verification LBA range: start 0x0 length 0x400 00:21:09.417 Nvme3n1 : 1.12 228.77 14.30 0.00 0.00 267862.85 17185.00 267192.70 00:21:09.417 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.417 Verification LBA range: start 0x0 length 0x400 00:21:09.417 Nvme4n1 : 1.11 236.15 14.76 0.00 0.00 253500.63 7475.96 256318.58 00:21:09.417 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.417 Verification LBA range: start 0x0 length 0x400 00:21:09.417 Nvme5n1 : 1.13 226.48 14.15 0.00 0.00 261557.48 21651.15 260978.92 00:21:09.417 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.417 Verification LBA range: start 0x0 length 0x400 00:21:09.417 Nvme6n1 : 1.12 229.35 14.33 0.00 0.00 253497.84 23010.42 256318.58 00:21:09.417 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.417 Verification LBA range: start 0x0 length 0x400 00:21:09.417 Nvme7n1 : 1.13 231.54 14.47 0.00 0.00 246239.12 1171.15 243891.01 00:21:09.417 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.417 Verification LBA range: start 0x0 
length 0x400 00:21:09.417 Nvme8n1 : 1.13 225.61 14.10 0.00 0.00 249225.86 18447.17 265639.25 00:21:09.417 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.417 Verification LBA range: start 0x0 length 0x400 00:21:09.418 Nvme9n1 : 1.18 270.87 16.93 0.00 0.00 204914.19 5704.06 282727.16 00:21:09.418 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:09.418 Verification LBA range: start 0x0 length 0x400 00:21:09.418 Nvme10n1 : 1.17 229.23 14.33 0.00 0.00 237169.17 1341.06 288940.94 00:21:09.418 =================================================================================================================== 00:21:09.418 Total : 2343.67 146.48 0.00 0.00 249866.70 1171.15 288940.94 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:09.676 rmmod nvme_tcp 00:21:09.676 rmmod nvme_fabrics 00:21:09.676 rmmod nvme_keyring 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 800224 ']' 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 800224 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 800224 ']' 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 800224 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 800224 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:09.676 
15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 800224' 00:21:09.676 killing process with pid 800224 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 800224 00:21:09.676 15:58:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 800224 00:21:10.242 15:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:10.242 15:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:10.242 15:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:10.242 15:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:10.242 15:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:10.242 15:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.242 15:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:10.242 15:58:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:12.776 00:21:12.776 real 0m12.199s 00:21:12.776 user 0m35.697s 00:21:12.776 sys 0m3.275s 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:12.776 ************************************ 00:21:12.776 END TEST nvmf_shutdown_tc1 00:21:12.776 ************************************ 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:12.776 ************************************ 00:21:12.776 START TEST nvmf_shutdown_tc2 00:21:12.776 ************************************ 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.776 15:58:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:12.776 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:12.777 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:12.777 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:12.777 Found net devices under 0000:84:00.0: cvl_0_0 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:12.777 Found net devices under 0000:84:00.1: cvl_0_1 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:12.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:21:12.777 00:21:12.777 --- 10.0.0.2 ping statistics --- 00:21:12.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.777 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:12.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:21:12.777 00:21:12.777 --- 10.0.0.1 ping statistics --- 00:21:12.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.777 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=801592 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 801592 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 801592 ']' 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.777 15:58:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:12.777 [2024-07-12 15:58:09.769580] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:21:12.777 [2024-07-12 15:58:09.769666] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.777 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.777 [2024-07-12 15:58:09.831208] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.777 [2024-07-12 15:58:09.935157] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.777 [2024-07-12 15:58:09.935214] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.777 [2024-07-12 15:58:09.935242] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.777 [2024-07-12 15:58:09.935254] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.777 [2024-07-12 15:58:09.935264] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
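[Annotation] At this point the tc2 target (nvmf_tgt -i 0 -e 0xFFFF -m 0x1E) has been launched inside the cvl_0_0_ns_spdk namespace and the harness sits in waitforlisten 801592 until the application's RPC socket answers. The loop below is only a sketch of that kind of readiness check; it reuses the /var/tmp/spdk.sock path and max_retries=100 values visible in the trace, while the actual probe performed by waitforlisten may differ.

# Hedged sketch of a waitforlisten-style readiness loop. The rpc.py probe is an
# assumption; any client call that fails until the app is listening would do.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i > 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1        # app exited before listening
        if [[ -S $rpc_addr ]] \
            && scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                                   # socket up and answering RPCs
        fi
        sleep 0.5
    done
    return 1                                           # gave up after max_retries polls
}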
00:21:12.777 [2024-07-12 15:58:09.935873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.777 [2024-07-12 15:58:09.935932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.777 [2024-07-12 15:58:09.936000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:12.777 [2024-07-12 15:58:09.936003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.777 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.777 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:12.777 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:12.777 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:12.777 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 [2024-07-12 15:58:10.082446] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:13.037 15:58:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.037 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.037 Malloc1 00:21:13.037 [2024-07-12 15:58:10.158063] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.037 Malloc2 00:21:13.037 Malloc3 00:21:13.037 Malloc4 00:21:13.294 Malloc5 00:21:13.294 Malloc6 00:21:13.294 Malloc7 00:21:13.294 Malloc8 00:21:13.294 Malloc9 00:21:13.294 Malloc10 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=801673 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 801673 /var/tmp/bdevperf.sock 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 801673 ']' 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:13.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.552 { 00:21:13.552 "params": { 00:21:13.552 "name": "Nvme$subsystem", 00:21:13.552 "trtype": "$TEST_TRANSPORT", 00:21:13.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.552 "adrfam": "ipv4", 00:21:13.552 "trsvcid": "$NVMF_PORT", 00:21:13.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.552 "hdgst": ${hdgst:-false}, 00:21:13.552 "ddgst": ${ddgst:-false} 00:21:13.552 }, 00:21:13.552 "method": "bdev_nvme_attach_controller" 00:21:13.552 } 00:21:13.552 EOF 00:21:13.552 )") 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.552 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.553 { 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme$subsystem", 00:21:13.553 "trtype": "$TEST_TRANSPORT", 00:21:13.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.553 "adrfam": "ipv4", 00:21:13.553 "trsvcid": "$NVMF_PORT", 00:21:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.553 "hdgst": ${hdgst:-false}, 00:21:13.553 "ddgst": ${ddgst:-false} 00:21:13.553 }, 00:21:13.553 "method": "bdev_nvme_attach_controller" 00:21:13.553 } 00:21:13.553 EOF 00:21:13.553 )") 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.553 { 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme$subsystem", 00:21:13.553 "trtype": "$TEST_TRANSPORT", 00:21:13.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.553 "adrfam": "ipv4", 00:21:13.553 "trsvcid": "$NVMF_PORT", 00:21:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.553 "hdgst": ${hdgst:-false}, 00:21:13.553 "ddgst": ${ddgst:-false} 00:21:13.553 }, 00:21:13.553 "method": "bdev_nvme_attach_controller" 00:21:13.553 } 00:21:13.553 EOF 00:21:13.553 )") 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.553 { 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme$subsystem", 00:21:13.553 "trtype": "$TEST_TRANSPORT", 00:21:13.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.553 "adrfam": "ipv4", 00:21:13.553 "trsvcid": "$NVMF_PORT", 
00:21:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.553 "hdgst": ${hdgst:-false}, 00:21:13.553 "ddgst": ${ddgst:-false} 00:21:13.553 }, 00:21:13.553 "method": "bdev_nvme_attach_controller" 00:21:13.553 } 00:21:13.553 EOF 00:21:13.553 )") 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.553 { 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme$subsystem", 00:21:13.553 "trtype": "$TEST_TRANSPORT", 00:21:13.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.553 "adrfam": "ipv4", 00:21:13.553 "trsvcid": "$NVMF_PORT", 00:21:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.553 "hdgst": ${hdgst:-false}, 00:21:13.553 "ddgst": ${ddgst:-false} 00:21:13.553 }, 00:21:13.553 "method": "bdev_nvme_attach_controller" 00:21:13.553 } 00:21:13.553 EOF 00:21:13.553 )") 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.553 { 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme$subsystem", 00:21:13.553 "trtype": "$TEST_TRANSPORT", 00:21:13.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.553 "adrfam": "ipv4", 00:21:13.553 "trsvcid": "$NVMF_PORT", 00:21:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.553 "hdgst": ${hdgst:-false}, 00:21:13.553 "ddgst": ${ddgst:-false} 00:21:13.553 }, 00:21:13.553 "method": "bdev_nvme_attach_controller" 00:21:13.553 } 00:21:13.553 EOF 00:21:13.553 )") 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.553 { 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme$subsystem", 00:21:13.553 "trtype": "$TEST_TRANSPORT", 00:21:13.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.553 "adrfam": "ipv4", 00:21:13.553 "trsvcid": "$NVMF_PORT", 00:21:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.553 "hdgst": ${hdgst:-false}, 00:21:13.553 "ddgst": ${ddgst:-false} 00:21:13.553 }, 00:21:13.553 "method": "bdev_nvme_attach_controller" 00:21:13.553 } 00:21:13.553 EOF 00:21:13.553 )") 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.553 { 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme$subsystem", 00:21:13.553 "trtype": "$TEST_TRANSPORT", 00:21:13.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.553 "adrfam": "ipv4", 00:21:13.553 "trsvcid": "$NVMF_PORT", 00:21:13.553 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.553 "hdgst": ${hdgst:-false}, 00:21:13.553 "ddgst": ${ddgst:-false} 00:21:13.553 }, 00:21:13.553 "method": "bdev_nvme_attach_controller" 00:21:13.553 } 00:21:13.553 EOF 00:21:13.553 )") 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.553 { 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme$subsystem", 00:21:13.553 "trtype": "$TEST_TRANSPORT", 00:21:13.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.553 "adrfam": "ipv4", 00:21:13.553 "trsvcid": "$NVMF_PORT", 00:21:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.553 "hdgst": ${hdgst:-false}, 00:21:13.553 "ddgst": ${ddgst:-false} 00:21:13.553 }, 00:21:13.553 "method": "bdev_nvme_attach_controller" 00:21:13.553 } 00:21:13.553 EOF 00:21:13.553 )") 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.553 { 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme$subsystem", 00:21:13.553 "trtype": "$TEST_TRANSPORT", 00:21:13.553 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.553 "adrfam": "ipv4", 00:21:13.553 "trsvcid": "$NVMF_PORT", 00:21:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.553 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.553 "hdgst": ${hdgst:-false}, 00:21:13.553 "ddgst": ${ddgst:-false} 00:21:13.553 }, 00:21:13.553 "method": "bdev_nvme_attach_controller" 00:21:13.553 } 00:21:13.553 EOF 00:21:13.553 )") 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:13.553 15:58:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme1", 00:21:13.553 "trtype": "tcp", 00:21:13.553 "traddr": "10.0.0.2", 00:21:13.553 "adrfam": "ipv4", 00:21:13.553 "trsvcid": "4420", 00:21:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.553 "hdgst": false, 00:21:13.553 "ddgst": false 00:21:13.553 }, 00:21:13.553 "method": "bdev_nvme_attach_controller" 00:21:13.553 },{ 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme2", 00:21:13.553 "trtype": "tcp", 00:21:13.553 "traddr": "10.0.0.2", 00:21:13.553 "adrfam": "ipv4", 00:21:13.553 "trsvcid": "4420", 00:21:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:13.553 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:13.553 "hdgst": false, 00:21:13.553 "ddgst": false 00:21:13.553 }, 00:21:13.553 "method": "bdev_nvme_attach_controller" 00:21:13.553 },{ 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme3", 00:21:13.553 "trtype": "tcp", 00:21:13.553 "traddr": "10.0.0.2", 00:21:13.553 "adrfam": "ipv4", 00:21:13.553 "trsvcid": "4420", 00:21:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:13.553 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:13.553 "hdgst": false, 00:21:13.553 "ddgst": false 00:21:13.553 }, 00:21:13.553 "method": "bdev_nvme_attach_controller" 00:21:13.553 },{ 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme4", 00:21:13.553 "trtype": "tcp", 00:21:13.553 "traddr": "10.0.0.2", 00:21:13.553 "adrfam": "ipv4", 00:21:13.553 "trsvcid": "4420", 00:21:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:13.553 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:13.553 "hdgst": false, 00:21:13.553 "ddgst": false 00:21:13.553 }, 00:21:13.553 "method": "bdev_nvme_attach_controller" 00:21:13.553 },{ 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme5", 00:21:13.553 "trtype": "tcp", 00:21:13.553 "traddr": "10.0.0.2", 00:21:13.553 "adrfam": "ipv4", 00:21:13.553 "trsvcid": "4420", 00:21:13.553 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:13.553 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:13.553 "hdgst": false, 00:21:13.553 "ddgst": false 00:21:13.553 }, 00:21:13.553 "method": "bdev_nvme_attach_controller" 00:21:13.553 },{ 00:21:13.553 "params": { 00:21:13.553 "name": "Nvme6", 00:21:13.554 "trtype": "tcp", 00:21:13.554 "traddr": "10.0.0.2", 00:21:13.554 "adrfam": "ipv4", 00:21:13.554 "trsvcid": "4420", 00:21:13.554 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:13.554 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:13.554 "hdgst": false, 00:21:13.554 "ddgst": false 00:21:13.554 }, 00:21:13.554 "method": "bdev_nvme_attach_controller" 00:21:13.554 },{ 00:21:13.554 "params": { 00:21:13.554 "name": "Nvme7", 00:21:13.554 "trtype": "tcp", 00:21:13.554 "traddr": "10.0.0.2", 00:21:13.554 "adrfam": "ipv4", 00:21:13.554 "trsvcid": "4420", 00:21:13.554 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:13.554 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:13.554 "hdgst": false, 00:21:13.554 "ddgst": false 00:21:13.554 }, 00:21:13.554 "method": "bdev_nvme_attach_controller" 00:21:13.554 },{ 00:21:13.554 "params": { 00:21:13.554 "name": "Nvme8", 00:21:13.554 "trtype": "tcp", 00:21:13.554 "traddr": "10.0.0.2", 00:21:13.554 "adrfam": "ipv4", 00:21:13.554 "trsvcid": "4420", 00:21:13.554 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:13.554 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:13.554 "hdgst": false, 
00:21:13.554 "ddgst": false 00:21:13.554 }, 00:21:13.554 "method": "bdev_nvme_attach_controller" 00:21:13.554 },{ 00:21:13.554 "params": { 00:21:13.554 "name": "Nvme9", 00:21:13.554 "trtype": "tcp", 00:21:13.554 "traddr": "10.0.0.2", 00:21:13.554 "adrfam": "ipv4", 00:21:13.554 "trsvcid": "4420", 00:21:13.554 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:13.554 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:13.554 "hdgst": false, 00:21:13.554 "ddgst": false 00:21:13.554 }, 00:21:13.554 "method": "bdev_nvme_attach_controller" 00:21:13.554 },{ 00:21:13.554 "params": { 00:21:13.554 "name": "Nvme10", 00:21:13.554 "trtype": "tcp", 00:21:13.554 "traddr": "10.0.0.2", 00:21:13.554 "adrfam": "ipv4", 00:21:13.554 "trsvcid": "4420", 00:21:13.554 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:13.554 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:13.554 "hdgst": false, 00:21:13.554 "ddgst": false 00:21:13.554 }, 00:21:13.554 "method": "bdev_nvme_attach_controller" 00:21:13.554 }' 00:21:13.554 [2024-07-12 15:58:10.677915] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:21:13.554 [2024-07-12 15:58:10.678001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid801673 ] 00:21:13.554 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.554 [2024-07-12 15:58:10.740896] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.811 [2024-07-12 15:58:10.854255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:15.707 Running I/O for 10 seconds... 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:15.707 15:58:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:15.964 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:15.964 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:15.964 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:15.964 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:15.964 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.964 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:15.964 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.964 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=72 00:21:15.964 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 72 -ge 100 ']' 00:21:15.964 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=136 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 136 -ge 100 ']' 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 801673 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 801673 ']' 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 801673 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # 
uname 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:16.221 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 801673 00:21:16.480 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:16.480 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:16.480 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 801673' 00:21:16.480 killing process with pid 801673 00:21:16.480 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 801673 00:21:16.480 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 801673 00:21:16.480 Received shutdown signal, test time was about 0.897446 seconds 00:21:16.480 00:21:16.480 Latency(us) 00:21:16.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.480 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.480 Verification LBA range: start 0x0 length 0x400 00:21:16.480 Nvme1n1 : 0.87 241.94 15.12 0.00 0.00 255620.45 12524.66 240784.12 00:21:16.480 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.480 Verification LBA range: start 0x0 length 0x400 00:21:16.480 Nvme2n1 : 0.86 228.70 14.29 0.00 0.00 268267.62 1723.35 251658.24 00:21:16.480 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.480 Verification LBA range: start 0x0 length 0x400 00:21:16.480 Nvme3n1 : 0.85 226.09 14.13 0.00 0.00 266982.59 18544.26 259425.47 00:21:16.480 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.480 Verification LBA range: start 0x0 length 0x400 00:21:16.480 Nvme4n1 : 0.90 285.52 17.84 0.00 0.00 207655.82 15631.55 246997.90 00:21:16.480 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.480 Verification LBA range: start 0x0 length 0x400 00:21:16.480 Nvme5n1 : 0.88 218.30 13.64 0.00 0.00 265225.61 20194.80 267192.70 00:21:16.480 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.480 Verification LBA range: start 0x0 length 0x400 00:21:16.480 Nvme6n1 : 0.88 217.39 13.59 0.00 0.00 260455.79 19806.44 260978.92 00:21:16.480 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.480 Verification LBA range: start 0x0 length 0x400 00:21:16.480 Nvme7n1 : 0.87 221.88 13.87 0.00 0.00 248373.35 37865.24 233016.89 00:21:16.480 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.480 Verification LBA range: start 0x0 length 0x400 00:21:16.480 Nvme8n1 : 0.87 220.38 13.77 0.00 0.00 244231.08 19223.89 250104.79 00:21:16.480 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.480 Verification LBA range: start 0x0 length 0x400 00:21:16.480 Nvme9n1 : 0.89 215.03 13.44 0.00 0.00 245590.09 20486.07 285834.05 00:21:16.480 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:16.480 Verification LBA range: start 0x0 length 0x400 00:21:16.480 Nvme10n1 : 0.89 216.05 13.50 0.00 0.00 238395.54 24758.04 268746.15 00:21:16.480 =================================================================================================================== 00:21:16.480 Total : 2291.27 
143.20 0.00 0.00 248821.97 1723.35 285834.05 00:21:16.738 15:58:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 801592 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:17.670 rmmod nvme_tcp 00:21:17.670 rmmod nvme_fabrics 00:21:17.670 rmmod nvme_keyring 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 801592 ']' 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 801592 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 801592 ']' 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 801592 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.670 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 801592 00:21:17.927 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:17.927 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:17.927 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 801592' 00:21:17.927 killing process with pid 801592 00:21:17.927 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 801592 00:21:17.927 15:58:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 801592 00:21:18.186 15:58:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' 
== iso ']' 00:21:18.186 15:58:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:18.186 15:58:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:18.186 15:58:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:18.186 15:58:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:18.186 15:58:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.186 15:58:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:18.186 15:58:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:20.718 00:21:20.718 real 0m7.968s 00:21:20.718 user 0m24.688s 00:21:20.718 sys 0m1.465s 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:20.718 ************************************ 00:21:20.718 END TEST nvmf_shutdown_tc2 00:21:20.718 ************************************ 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:20.718 ************************************ 00:21:20.718 START TEST nvmf_shutdown_tc3 00:21:20.718 ************************************ 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:20.718 15:58:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:20.718 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:20.718 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:20.718 Found net devices under 0000:84:00.0: cvl_0_0 00:21:20.718 15:58:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:20.718 Found net devices under 0000:84:00.1: cvl_0_1 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:20.718 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:20.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:20.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:21:20.719 00:21:20.719 --- 10.0.0.2 ping statistics --- 00:21:20.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.719 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:20.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:20.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:21:20.719 00:21:20.719 --- 10.0.0.1 ping statistics --- 00:21:20.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.719 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=802703 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 802703 00:21:20.719 15:58:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 802703 ']' 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.719 15:58:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:20.719 [2024-07-12 15:58:17.799441] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:21:20.719 [2024-07-12 15:58:17.799508] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.719 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.719 [2024-07-12 15:58:17.860120] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:20.719 [2024-07-12 15:58:17.961492] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.719 [2024-07-12 15:58:17.961548] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.719 [2024-07-12 15:58:17.961560] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.719 [2024-07-12 15:58:17.961571] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.719 [2024-07-12 15:58:17.961581] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
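A side note on the nvmf_tgt invocation traced above: -e 0xFFFF enables every tracepoint group (matching the "Tracepoint Group Mask 0xFFFF specified" notice), and the -m 0x1E reactor mask is binary 11110, i.e. cores 1 through 4, which is why the next lines report exactly four reactors. A minimal bash check of that mask arithmetic, illustrative only and not part of the captured run:

mask=0x1E                           # value passed to nvmf_tgt -m above
for core in $(seq 0 7); do
    if (( (mask >> core) & 1 )); then
        echo "reactor expected on core $core"
    fi
done
# prints cores 1 2 3 4, matching the four 'Reactor started on core N' notices below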
00:21:20.719 [2024-07-12 15:58:17.961664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.719 [2024-07-12 15:58:17.961724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.719 [2024-07-12 15:58:17.961860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:20.719 [2024-07-12 15:58:17.961863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:20.976 [2024-07-12 15:58:18.111404] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:20.976 15:58:18 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.976 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:20.976 Malloc1 00:21:20.976 [2024-07-12 15:58:18.190388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.976 Malloc2 00:21:20.976 Malloc3 00:21:21.239 Malloc4 00:21:21.239 Malloc5 00:21:21.239 Malloc6 00:21:21.239 Malloc7 00:21:21.239 Malloc8 00:21:21.496 Malloc9 00:21:21.496 Malloc10 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=802762 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 802762 /var/tmp/bdevperf.sock 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 802762 ']' 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
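The command traced just above launches bdevperf against its own RPC socket at /var/tmp/bdevperf.sock, feeding it the generated target JSON on /dev/fd/63 (the descriptor bash assigns to a process substitution), with a queue depth of 64, 64 KiB I/Os, a verify workload, and a 10-second run. A sketch of the same launch outside the harness, assuming the helpers gen_nvmf_target_json and waitforlisten seen in this trace and the workspace path shown in the log; the spdk_dir variable is introduced here only for readability:

spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$spdk_dir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!                                        # 802762 in this run
waitforlisten "$perfpid" /var/tmp/bdevperf.sock   # block until the bdevperf RPC socket is up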
00:21:21.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.496 { 00:21:21.496 "params": { 00:21:21.496 "name": "Nvme$subsystem", 00:21:21.496 "trtype": "$TEST_TRANSPORT", 00:21:21.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.496 "adrfam": "ipv4", 00:21:21.496 "trsvcid": "$NVMF_PORT", 00:21:21.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.496 "hdgst": ${hdgst:-false}, 00:21:21.496 "ddgst": ${ddgst:-false} 00:21:21.496 }, 00:21:21.496 "method": "bdev_nvme_attach_controller" 00:21:21.496 } 00:21:21.496 EOF 00:21:21.496 )") 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.496 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.496 { 00:21:21.496 "params": { 00:21:21.496 "name": "Nvme$subsystem", 00:21:21.496 "trtype": "$TEST_TRANSPORT", 00:21:21.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.496 "adrfam": "ipv4", 00:21:21.496 "trsvcid": "$NVMF_PORT", 00:21:21.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.497 "hdgst": ${hdgst:-false}, 00:21:21.497 "ddgst": ${ddgst:-false} 00:21:21.497 }, 00:21:21.497 "method": "bdev_nvme_attach_controller" 00:21:21.497 } 00:21:21.497 EOF 00:21:21.497 )") 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.497 { 00:21:21.497 "params": { 00:21:21.497 "name": "Nvme$subsystem", 00:21:21.497 "trtype": "$TEST_TRANSPORT", 00:21:21.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.497 "adrfam": "ipv4", 00:21:21.497 "trsvcid": "$NVMF_PORT", 00:21:21.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.497 "hdgst": ${hdgst:-false}, 00:21:21.497 "ddgst": ${ddgst:-false} 00:21:21.497 }, 00:21:21.497 "method": "bdev_nvme_attach_controller" 00:21:21.497 } 00:21:21.497 EOF 00:21:21.497 )") 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.497 { 00:21:21.497 "params": { 00:21:21.497 "name": "Nvme$subsystem", 00:21:21.497 "trtype": "$TEST_TRANSPORT", 00:21:21.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.497 "adrfam": "ipv4", 00:21:21.497 "trsvcid": "$NVMF_PORT", 
00:21:21.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.497 "hdgst": ${hdgst:-false}, 00:21:21.497 "ddgst": ${ddgst:-false} 00:21:21.497 }, 00:21:21.497 "method": "bdev_nvme_attach_controller" 00:21:21.497 } 00:21:21.497 EOF 00:21:21.497 )") 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.497 { 00:21:21.497 "params": { 00:21:21.497 "name": "Nvme$subsystem", 00:21:21.497 "trtype": "$TEST_TRANSPORT", 00:21:21.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.497 "adrfam": "ipv4", 00:21:21.497 "trsvcid": "$NVMF_PORT", 00:21:21.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.497 "hdgst": ${hdgst:-false}, 00:21:21.497 "ddgst": ${ddgst:-false} 00:21:21.497 }, 00:21:21.497 "method": "bdev_nvme_attach_controller" 00:21:21.497 } 00:21:21.497 EOF 00:21:21.497 )") 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.497 { 00:21:21.497 "params": { 00:21:21.497 "name": "Nvme$subsystem", 00:21:21.497 "trtype": "$TEST_TRANSPORT", 00:21:21.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.497 "adrfam": "ipv4", 00:21:21.497 "trsvcid": "$NVMF_PORT", 00:21:21.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.497 "hdgst": ${hdgst:-false}, 00:21:21.497 "ddgst": ${ddgst:-false} 00:21:21.497 }, 00:21:21.497 "method": "bdev_nvme_attach_controller" 00:21:21.497 } 00:21:21.497 EOF 00:21:21.497 )") 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.497 { 00:21:21.497 "params": { 00:21:21.497 "name": "Nvme$subsystem", 00:21:21.497 "trtype": "$TEST_TRANSPORT", 00:21:21.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.497 "adrfam": "ipv4", 00:21:21.497 "trsvcid": "$NVMF_PORT", 00:21:21.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.497 "hdgst": ${hdgst:-false}, 00:21:21.497 "ddgst": ${ddgst:-false} 00:21:21.497 }, 00:21:21.497 "method": "bdev_nvme_attach_controller" 00:21:21.497 } 00:21:21.497 EOF 00:21:21.497 )") 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.497 { 00:21:21.497 "params": { 00:21:21.497 "name": "Nvme$subsystem", 00:21:21.497 "trtype": "$TEST_TRANSPORT", 00:21:21.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.497 "adrfam": "ipv4", 00:21:21.497 "trsvcid": "$NVMF_PORT", 00:21:21.497 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.497 "hdgst": ${hdgst:-false}, 00:21:21.497 "ddgst": ${ddgst:-false} 00:21:21.497 }, 00:21:21.497 "method": "bdev_nvme_attach_controller" 00:21:21.497 } 00:21:21.497 EOF 00:21:21.497 )") 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.497 { 00:21:21.497 "params": { 00:21:21.497 "name": "Nvme$subsystem", 00:21:21.497 "trtype": "$TEST_TRANSPORT", 00:21:21.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.497 "adrfam": "ipv4", 00:21:21.497 "trsvcid": "$NVMF_PORT", 00:21:21.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.497 "hdgst": ${hdgst:-false}, 00:21:21.497 "ddgst": ${ddgst:-false} 00:21:21.497 }, 00:21:21.497 "method": "bdev_nvme_attach_controller" 00:21:21.497 } 00:21:21.497 EOF 00:21:21.497 )") 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:21.497 { 00:21:21.497 "params": { 00:21:21.497 "name": "Nvme$subsystem", 00:21:21.497 "trtype": "$TEST_TRANSPORT", 00:21:21.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.497 "adrfam": "ipv4", 00:21:21.497 "trsvcid": "$NVMF_PORT", 00:21:21.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.497 "hdgst": ${hdgst:-false}, 00:21:21.497 "ddgst": ${ddgst:-false} 00:21:21.497 }, 00:21:21.497 "method": "bdev_nvme_attach_controller" 00:21:21.497 } 00:21:21.497 EOF 00:21:21.497 )") 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:21.497 15:58:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:21.497 "params": { 00:21:21.497 "name": "Nvme1", 00:21:21.497 "trtype": "tcp", 00:21:21.497 "traddr": "10.0.0.2", 00:21:21.497 "adrfam": "ipv4", 00:21:21.497 "trsvcid": "4420", 00:21:21.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.497 "hdgst": false, 00:21:21.497 "ddgst": false 00:21:21.497 }, 00:21:21.497 "method": "bdev_nvme_attach_controller" 00:21:21.497 },{ 00:21:21.497 "params": { 00:21:21.497 "name": "Nvme2", 00:21:21.497 "trtype": "tcp", 00:21:21.497 "traddr": "10.0.0.2", 00:21:21.497 "adrfam": "ipv4", 00:21:21.497 "trsvcid": "4420", 00:21:21.497 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:21.497 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:21.497 "hdgst": false, 00:21:21.497 "ddgst": false 00:21:21.497 }, 00:21:21.497 "method": "bdev_nvme_attach_controller" 00:21:21.497 },{ 00:21:21.497 "params": { 00:21:21.497 "name": "Nvme3", 00:21:21.497 "trtype": "tcp", 00:21:21.497 "traddr": "10.0.0.2", 00:21:21.497 "adrfam": "ipv4", 00:21:21.497 "trsvcid": "4420", 00:21:21.497 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:21.497 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:21.497 "hdgst": false, 00:21:21.497 "ddgst": false 00:21:21.497 }, 00:21:21.497 "method": "bdev_nvme_attach_controller" 00:21:21.497 },{ 00:21:21.497 "params": { 00:21:21.497 "name": "Nvme4", 00:21:21.497 "trtype": "tcp", 00:21:21.497 "traddr": "10.0.0.2", 00:21:21.497 "adrfam": "ipv4", 00:21:21.497 "trsvcid": "4420", 00:21:21.497 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:21.497 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:21.497 "hdgst": false, 00:21:21.497 "ddgst": false 00:21:21.497 }, 00:21:21.497 "method": "bdev_nvme_attach_controller" 00:21:21.497 },{ 00:21:21.497 "params": { 00:21:21.497 "name": "Nvme5", 00:21:21.497 "trtype": "tcp", 00:21:21.497 "traddr": "10.0.0.2", 00:21:21.497 "adrfam": "ipv4", 00:21:21.497 "trsvcid": "4420", 00:21:21.497 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:21.497 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:21.497 "hdgst": false, 00:21:21.497 "ddgst": false 00:21:21.497 }, 00:21:21.497 "method": "bdev_nvme_attach_controller" 00:21:21.497 },{ 00:21:21.497 "params": { 00:21:21.497 "name": "Nvme6", 00:21:21.497 "trtype": "tcp", 00:21:21.497 "traddr": "10.0.0.2", 00:21:21.497 "adrfam": "ipv4", 00:21:21.498 "trsvcid": "4420", 00:21:21.498 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:21.498 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:21.498 "hdgst": false, 00:21:21.498 "ddgst": false 00:21:21.498 }, 00:21:21.498 "method": "bdev_nvme_attach_controller" 00:21:21.498 },{ 00:21:21.498 "params": { 00:21:21.498 "name": "Nvme7", 00:21:21.498 "trtype": "tcp", 00:21:21.498 "traddr": "10.0.0.2", 00:21:21.498 "adrfam": "ipv4", 00:21:21.498 "trsvcid": "4420", 00:21:21.498 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:21.498 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:21.498 "hdgst": false, 00:21:21.498 "ddgst": false 00:21:21.498 }, 00:21:21.498 "method": "bdev_nvme_attach_controller" 00:21:21.498 },{ 00:21:21.498 "params": { 00:21:21.498 "name": "Nvme8", 00:21:21.498 "trtype": "tcp", 00:21:21.498 "traddr": "10.0.0.2", 00:21:21.498 "adrfam": "ipv4", 00:21:21.498 "trsvcid": "4420", 00:21:21.498 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:21.498 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:21.498 "hdgst": false, 
00:21:21.498 "ddgst": false 00:21:21.498 }, 00:21:21.498 "method": "bdev_nvme_attach_controller" 00:21:21.498 },{ 00:21:21.498 "params": { 00:21:21.498 "name": "Nvme9", 00:21:21.498 "trtype": "tcp", 00:21:21.498 "traddr": "10.0.0.2", 00:21:21.498 "adrfam": "ipv4", 00:21:21.498 "trsvcid": "4420", 00:21:21.498 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:21.498 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:21.498 "hdgst": false, 00:21:21.498 "ddgst": false 00:21:21.498 }, 00:21:21.498 "method": "bdev_nvme_attach_controller" 00:21:21.498 },{ 00:21:21.498 "params": { 00:21:21.498 "name": "Nvme10", 00:21:21.498 "trtype": "tcp", 00:21:21.498 "traddr": "10.0.0.2", 00:21:21.498 "adrfam": "ipv4", 00:21:21.498 "trsvcid": "4420", 00:21:21.498 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:21.498 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:21.498 "hdgst": false, 00:21:21.498 "ddgst": false 00:21:21.498 }, 00:21:21.498 "method": "bdev_nvme_attach_controller" 00:21:21.498 }' 00:21:21.498 [2024-07-12 15:58:18.679883] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:21:21.498 [2024-07-12 15:58:18.679961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802762 ] 00:21:21.498 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.498 [2024-07-12 15:58:18.745920] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.755 [2024-07-12 15:58:18.858379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.651 Running I/O for 10 seconds... 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:23.651 15:58:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:23.908 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:23.908 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:23.908 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:23.908 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:23.908 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.908 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:23.908 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.908 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:23.908 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:23.908 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:24.166 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:24.166 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:24.166 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:24.166 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:24.166 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.166 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 802703 00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 802703 ']' 00:21:24.440 
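The killprocess trace that begins here for pid 802703 (and the identical sequences for pids 801673 and 801592 earlier in this run) follows one pattern: confirm the pid argument is set, check the process is still alive, look up its command name, refuse to operate on a bare sudo, then kill it and wait so the exit status is reaped. A reconstructed sketch of that pattern, not the verbatim common.sh helper:

killprocess() {
    local pid=$1 process_name=
    if [ -z "$pid" ]; then return 1; fi                    # '[' -z 802703 ']' in the trace
    kill -0 "$pid" || return 1                             # kill -0: process must still exist
    if [ "$(uname)" = Linux ]; then                        # '[' Linux = Linux ']'
        process_name=$(ps --no-headers -o comm= "$pid")    # reactor_1 here
    fi
    if [ "$process_name" = sudo ]; then return 1; fi       # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                                    # reap it; a non-zero exit is expected at shutdown
}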
15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 802703
00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname
00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 802703
00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 802703'
00:21:24.440 killing process with pid 802703
00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 802703
00:21:24.440 15:58:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 802703
00:21:24.440 [2024-07-12 15:58:21.520819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ef4b0 is same with the state(5) to be set
[... the same tcp.c:1607 *ERROR* line for tqpair=0x25ef4b0 repeats through 15:58:21.521423 ...]
00:21:24.441 [2024-07-12 15:58:21.522791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f20f0 is same with the state(5) to be set
[... the same tcp.c:1607 *ERROR* line for tqpair=0x25f20f0 repeats through 15:58:21.523616 ...]
00:21:24.442 [2024-07-12 15:58:21.524999] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25ef990 is same with the state(5) to be set
[... the same tcp.c:1607 *ERROR* line for tqpair=0x25ef990 repeats through 15:58:21.525869 ...]
00:21:24.442 [2024-07-12 15:58:21.529125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f0850 is same with the state(5) to be set
[... the same tcp.c:1607 *ERROR* line for tqpair=0x25f0850 repeats through 15:58:21.529978 ...]
00:21:24.443 [2024-07-12 15:58:21.530273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:24.443 [2024-07-12 15:58:21.530314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... WRITE commands sqid:1 cid:51-63 (nsid:1, lba:31104-32640, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and READ commands sqid:1 cid:4-16 (nsid:1, lba:25088-26624, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each followed by the same ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 completion, continue through 15:58:21.531183 ...]
00:21:24.444 [2024-07-12 15:58:21.531181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f0d50 is same with the state(5) to be set
[... the same tcp.c:1607 *ERROR* line for tqpair=0x25f0d50 repeats through 15:58:21.532128, spliced mid-line into the nvme_qpair output of this span; de-interleaved, that output is: READ commands sqid:1 cid:17-47 (nsid:1, lba:26752-30592, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands sqid:1 cid:0-3 (nsid:1, lba:32768-33152, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), alternating with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 completions, from 15:58:21.531200 through 15:58:21.532348 ...]
00:21:24.445 [2024-07-12
15:58:21.532361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.445 [2024-07-12 15:58:21.532376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.445 [2024-07-12 15:58:21.532393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.445 [2024-07-12 15:58:21.532409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.445 [2024-07-12 15:58:21.532423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.445 [2024-07-12 15:58:21.532465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:24.445 [2024-07-12 15:58:21.532538] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1790040 was disconnected and freed. reset controller. 00:21:24.445 [2024-07-12 15:58:21.533097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.445 [2024-07-12 15:58:21.533121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.445 [2024-07-12 15:58:21.533136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.445 [2024-07-12 15:58:21.533149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.445 [2024-07-12 15:58:21.533163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.445 [2024-07-12 15:58:21.533175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.445 [2024-07-12 15:58:21.533189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.445 [2024-07-12 15:58:21.533202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.445 [2024-07-12 15:58:21.533214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194a520 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.533300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.533327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.533353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.533379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1297610 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.533473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.533501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.533529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.533545] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795d90 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533559] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.533622] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-12 15:58:21.533646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with [2024-07-12 15:58:21.533661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsthe state(5) to be set 00:21:24.446 id:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with [2024-07-12 15:58:21.533677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:21:24.446 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.533690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with [2024-07-12 15:58:21.533692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsthe state(5) to be set 00:21:24.446 id:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with [2024-07-12 15:58:21.533710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:21:24.446 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.533724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with [2024-07-12 15:58:21.533726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1961ac0 is same the state(5) to be set 00:21:24.446 with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533804] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-12 15:58:21.533829] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.533869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533883] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.533896] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.533922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b9d50 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533952] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with 
the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.533977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.533990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-12 15:58:21.533991] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.534006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with [2024-07-12 15:58:21.534007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsthe state(5) to be set 00:21:24.446 id:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.534021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with [2024-07-12 15:58:21.534023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:21:24.446 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.534037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.534039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.534050] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.534067] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with [2024-07-12 15:58:21.534067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:21:24.446 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.534081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.534084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.446 [2024-07-12 15:58:21.534094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.534098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.446 [2024-07-12 15:58:21.534106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.534111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18665c0 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.534131] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.446 [2024-07-12 15:58:21.534143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 
00:21:24.447 [2024-07-12 15:58:21.534156] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.447 [2024-07-12 15:58:21.534173] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-12 15:58:21.534198] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with [2024-07-12 15:58:21.534215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsthe state(5) to be set 00:21:24.447 id:0 cdw10:00000000 cdw11:00000000 00:21:24.447 [2024-07-12 15:58:21.534228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.534241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.447 [2024-07-12 15:58:21.534263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.534265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns[2024-07-12 15:58:21.534277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with id:0 cdw10:00000000 cdw11:00000000 00:21:24.447 the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.534322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c51a0 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.447 [2024-07-12 
15:58:21.534358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.447 [2024-07-12 15:58:21.534384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.534395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns[2024-07-12 15:58:21.534407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with id:0 cdw10:00000000 cdw11:00000000 00:21:24.447 the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534423] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with [2024-07-12 15:58:21.534424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:21:24.447 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.534437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1230 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.447 [2024-07-12 15:58:21.534454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.534467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.447 [2024-07-12 15:58:21.534480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.534493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c1a40 is same with the state(5) to be set 00:21:24.447 [2024-07-12 15:58:21.534731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.534787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.534809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.534825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.534841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.534856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.534871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.534885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.534901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.534915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.534931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.534945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.534960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.534974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.534990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:24.447 [2024-07-12 15:58:21.535197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 
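Editor's note (illustrative, not part of the captured test output): the pattern in this log, queued READ/WRITE commands completing with "ABORTED - SQ DELETION (00/08)", then spdk_nvme_qpair_process_completions reporting "CQ transport error -6 (No such device or address)" and bdev_nvme_disconnected_qpair_cb freeing the qpair and resetting the controller, is the host-side view of the target tearing down a TCP queue pair while I/O is still outstanding (-6 is -ENXIO). As a minimal sketch only, assuming an application that drives its own polling rather than the bdev_nvme layer used by this test, the same condition would surface as a negative return value from spdk_nvme_qpair_process_completions(); the function name poll_and_recover and the recovery policy below are hypothetical, while the spdk_nvme_* calls are the public SPDK API.

#include <stdio.h>

#include "spdk/nvme.h"

/*
 * Illustrative sketch: poll an I/O qpair and, when the transport reports the
 * qpair has failed (e.g. -ENXIO, printed as "CQ transport error -6" above),
 * free it, reset the controller and allocate a replacement qpair.
 */
static struct spdk_nvme_qpair *
poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0 means "process everything that is ready". */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc >= 0) {
		return qpair;	/* rc completions were handled; nothing to recover */
	}

	fprintf(stderr, "qpair failed (rc=%d), resetting controller\n", rc);

	spdk_nvme_ctrlr_free_io_qpair(qpair);
	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		return NULL;	/* controller could not be recovered */
	}

	/* Allocate a fresh I/O qpair with default options after the reset. */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
}

In this run the recovery is driven by bdev_nvme itself (the "reset controller." notices in the surrounding output), so the sketch only mirrors the sequence of events, not the exact code path exercised by the test.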
[2024-07-12 15:58:21.535498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.447 [2024-07-12 15:58:21.535541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.447 [2024-07-12 15:58:21.535539] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.535568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with [2024-07-12 15:58:21.535571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:1the state(5) to be set 00:21:24.448 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.535585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with [2024-07-12 15:58:21.535586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:21:24.448 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.535600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.535613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.535626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.535646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with [2024-07-12 15:58:21.535646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:21:24.448 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.535661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.535673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is 
same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.535686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:1[2024-07-12 15:58:21.535699] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-12 15:58:21.535713] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:1[2024-07-12 15:58:21.535746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with [2024-07-12 15:58:21.535780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:21:24.448 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.535794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.535807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.535820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.535833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.535846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with [2024-07-12 15:58:21.535859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:1the state(5) to be set 
00:21:24.448 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.535873] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.535886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:1[2024-07-12 15:58:21.535899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with [2024-07-12 15:58:21.535913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:21:24.448 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.535933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.535947] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.535960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.535972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.535985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.535997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with [2024-07-12 15:58:21.535997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:1the state(5) to be set 00:21:24.448 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.536012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.536014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.536025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.536031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.536064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.536070] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.536080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.536084] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.536094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-12 15:58:21.536096] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.536110] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.536111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.536122] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.536125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.536135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.536144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.536148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.536159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.536161] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.536174] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with [2024-07-12 15:58:21.536174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:1the state(5) to be set 00:21:24.448 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.536188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with [2024-07-12 15:58:21.536190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:21:24.448 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.536203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.536206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.536216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.536220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.536228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.536236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.448 [2024-07-12 15:58:21.536241] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.448 [2024-07-12 15:58:21.536250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.448 [2024-07-12 15:58:21.536253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with [2024-07-12 15:58:21.536265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:1the state(5) to be set 00:21:24.449 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with [2024-07-12 15:58:21.536281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:21:24.449 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:1[2024-07-12 15:58:21.536332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-12 15:58:21.536346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x25f1710 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536386] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-12 15:58:21.536437] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536451] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1710 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.536466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 
15:58:21.536523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 15:58:21.536818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.449 [2024-07-12 
15:58:21.536847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.449 [2024-07-12 15:58:21.536922] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18e6430 was disconnected and freed. reset controller. 00:21:24.449 [2024-07-12 15:58:21.537216] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537319] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537331] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537442] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 
is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537478] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.449 [2024-07-12 15:58:21.537489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537513] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537538] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537573] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537642] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537690] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537702] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537714] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537730] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537764] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537777] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537833] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537881] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537918] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537955] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.537995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.538008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.538020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.538040] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25f1bf0 is same with the 
state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.538548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:24.450 [2024-07-12 15:58:21.538584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b9d50 (9): Bad file descriptor 00:21:24.450 [2024-07-12 15:58:21.540275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:24.450 [2024-07-12 15:58:21.540307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c1a40 (9): Bad file descriptor 00:21:24.450 [2024-07-12 15:58:21.541257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.450 [2024-07-12 15:58:21.541288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b9d50 with addr=10.0.0.2, port=4420 00:21:24.450 [2024-07-12 15:58:21.541306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b9d50 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.541760] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:24.450 [2024-07-12 15:58:21.541838] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:24.450 [2024-07-12 15:58:21.542108] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:24.450 [2024-07-12 15:58:21.542270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.450 [2024-07-12 15:58:21.542297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c1a40 with addr=10.0.0.2, port=4420 00:21:24.450 [2024-07-12 15:58:21.542313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c1a40 is same with the state(5) to be set 00:21:24.450 [2024-07-12 15:58:21.542333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b9d50 (9): Bad file descriptor 00:21:24.450 [2024-07-12 15:58:21.542380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.450 [2024-07-12 15:58:21.542951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.450 [2024-07-12 15:58:21.542967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.542986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:24.451 [2024-07-12 15:58:21.543863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.543972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.543986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.544002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.544016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.544034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.544048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.544064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.544079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.544096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.544111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.544126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.544141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.544156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 
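
The completions printed above all carry the status "ABORTED - SQ DELETION (00/08)"; the two hex fields appear to be the NVMe status code type and status code, and SCT 0x0 with SC 0x08 is the generic "Command Aborted due to SQ Deletion" status, which is what outstanding I/O is expected to return while the target tears down the submission queues during these resets. A minimal sketch of decoding that pair, assuming the (SCT/SC) reading above; the helper and lookup are hypothetical and not an SPDK API:

/* Illustrative sketch only: decode the "(SCT/SC)" pair shown after the
 * status string in the completions above, e.g. "ABORTED - SQ DELETION (00/08)".
 * The helper name and tiny lookup table are hypothetical, not SPDK code. */
#include <stdio.h>

static const char *decode_generic_status(unsigned sct, unsigned sc)
{
    if (sct == 0x0 && sc == 0x00) return "SUCCESS";
    if (sct == 0x0 && sc == 0x08) return "ABORTED - SQ DELETION";
    return "OTHER";
}

int main(void)
{
    unsigned sct = 0x00, sc = 0x08;   /* the pair repeated throughout this log */

    printf("%s (%02x/%02x)\n", decode_generic_status(sct, sc), sct, sc);
    return 0;
}
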
[2024-07-12 15:58:21.544170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.544190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.544206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.544222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.544236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.544252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.544267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.544283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.451 [2024-07-12 15:58:21.544297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.451 [2024-07-12 15:58:21.544313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.452 [2024-07-12 15:58:21.544327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.544344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.452 [2024-07-12 15:58:21.544358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.544374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.452 [2024-07-12 15:58:21.544388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.544404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.452 [2024-07-12 15:58:21.544418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.544434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.452 [2024-07-12 15:58:21.544449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.544463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x183e370 is same with the state(5) to be set 00:21:24.452 [2024-07-12 15:58:21.544537] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x183e370 was disconnected and freed. reset controller. 00:21:24.452 [2024-07-12 15:58:21.544623] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:24.452 [2024-07-12 15:58:21.544783] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:24.452 [2024-07-12 15:58:21.544865] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:24.452 [2024-07-12 15:58:21.544982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c1a40 (9): Bad file descriptor 00:21:24.452 [2024-07-12 15:58:21.545009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:24.452 [2024-07-12 15:58:21.545024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:24.452 [2024-07-12 15:58:21.545041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:24.452 [2024-07-12 15:58:21.545092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194a520 (9): Bad file descriptor 00:21:24.452 [2024-07-12 15:58:21.545131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1297610 (9): Bad file descriptor 00:21:24.452 [2024-07-12 15:58:21.545185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.452 [2024-07-12 15:58:21.545216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.545231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.452 [2024-07-12 15:58:21.545245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.545260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.452 [2024-07-12 15:58:21.545273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.545287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.452 [2024-07-12 15:58:21.545301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.545314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185ca20 is same with the state(5) to be set 00:21:24.452 [2024-07-12 15:58:21.545342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1795d90 (9): Bad file descriptor 00:21:24.452 [2024-07-12 15:58:21.545368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1961ac0 (9): Bad file descriptor 00:21:24.452 [2024-07-12 15:58:21.545415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.452 [2024-07-12 15:58:21.545436] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.545452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.452 [2024-07-12 15:58:21.545466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.545480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.452 [2024-07-12 15:58:21.545500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.545514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.452 [2024-07-12 15:58:21.545527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.545540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956040 is same with the state(5) to be set 00:21:24.452 [2024-07-12 15:58:21.545570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18665c0 (9): Bad file descriptor 00:21:24.452 [2024-07-12 15:58:21.545602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c51a0 (9): Bad file descriptor 00:21:24.452 [2024-07-12 15:58:21.546910] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:24.452 [2024-07-12 15:58:21.546952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.452 [2024-07-12 15:58:21.546974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.452 [2024-07-12 15:58:21.547013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:24.452 [2024-07-12 15:58:21.547031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:24.452 [2024-07-12 15:58:21.547046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:24.452 [2024-07-12 15:58:21.547133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:24.452 [2024-07-12 15:58:21.547255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.452 [2024-07-12 15:58:21.547283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1795d90 with addr=10.0.0.2, port=4420 00:21:24.452 [2024-07-12 15:58:21.547299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795d90 is same with the state(5) to be set 00:21:24.452 [2024-07-12 15:58:21.547643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1795d90 (9): Bad file descriptor 00:21:24.452 [2024-07-12 15:58:21.547713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.452 [2024-07-12 15:58:21.547754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.452 [2024-07-12 15:58:21.547772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.452 [2024-07-12 15:58:21.547838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.452 [2024-07-12 15:58:21.550449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:24.452 [2024-07-12 15:58:21.550687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.452 [2024-07-12 15:58:21.550714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b9d50 with addr=10.0.0.2, port=4420 00:21:24.452 [2024-07-12 15:58:21.550745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b9d50 is same with the state(5) to be set 00:21:24.452 [2024-07-12 15:58:21.550805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b9d50 (9): Bad file descriptor 00:21:24.452 [2024-07-12 15:58:21.550863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:24.452 [2024-07-12 15:58:21.550880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:24.452 [2024-07-12 15:58:21.550894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:24.452 [2024-07-12 15:58:21.550960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
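
The reconnect attempts above keep failing with "connect() failed, errno = 111"; on Linux errno 111 is ECONNREFUSED, i.e. the connection was actively refused, typically because nothing is listening on 10.0.0.2:4420 at that moment, so each controller reset ends in "Resetting controller failed." until the listener comes back. A minimal, self-contained sketch that reproduces the same errno with plain POSIX sockets (not SPDK code; the loopback port below is an arbitrary choice assumed to have no listener):

/* Sketch: a refused TCP connect yields errno 111 (ECONNREFUSED) on Linux,
 * matching the "connect() failed, errno = 111" lines in this log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };

    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                   /* default NVMe/TCP port */
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); /* assumed: no listener here */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Expected: "connect() failed, errno = 111 (Connection refused)" */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
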
00:21:24.452 [2024-07-12 15:58:21.551458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:24.452 [2024-07-12 15:58:21.551685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.452 [2024-07-12 15:58:21.551711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c1a40 with addr=10.0.0.2, port=4420 00:21:24.452 [2024-07-12 15:58:21.551728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c1a40 is same with the state(5) to be set 00:21:24.452 [2024-07-12 15:58:21.551796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c1a40 (9): Bad file descriptor 00:21:24.452 [2024-07-12 15:58:21.551855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:24.452 [2024-07-12 15:58:21.551873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:24.452 [2024-07-12 15:58:21.551887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:24.452 [2024-07-12 15:58:21.551943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.452 [2024-07-12 15:58:21.555050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185ca20 (9): Bad file descriptor 00:21:24.452 [2024-07-12 15:58:21.555103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1956040 (9): Bad file descriptor 00:21:24.452 [2024-07-12 15:58:21.555268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.452 [2024-07-12 15:58:21.555293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.555323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.452 [2024-07-12 15:58:21.555340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.555357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.452 [2024-07-12 15:58:21.555371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.555388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.452 [2024-07-12 15:58:21.555402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.555419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.452 [2024-07-12 15:58:21.555434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.555451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.452 [2024-07-12 15:58:21.555465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.555481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.452 [2024-07-12 15:58:21.555496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.452 [2024-07-12 15:58:21.555512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.555975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.555992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:24.453 [2024-07-12 15:58:21.556104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 
15:58:21.556406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.453 [2024-07-12 15:58:21.556639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.453 [2024-07-12 15:58:21.556653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.556670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.556684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.556700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.556714] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.556729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.556750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.556767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.556782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.556798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.556812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.556828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.556846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.556862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.556877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.556893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.556907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.556923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.556937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.556953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.556967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.556984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.556999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.557015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.557029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.557045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.557060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.557075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.557090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.557105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.557120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.557136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.557150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.557166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.557180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.557197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.557211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.557230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.557245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.557261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.557275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.557290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18e4f20 is same with the state(5) to be set 00:21:24.454 [2024-07-12 15:58:21.558575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.558599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.558620] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.558635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.558651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.558665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.558681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.558695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.558712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.558733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.558757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.558772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.558789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.558803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.558820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.558834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.558850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.558864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.558880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.558895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.558916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.558931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.558947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.558962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.558977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.558992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.559008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.559023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.559050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.559064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.559081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.559096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.559112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.559127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.559143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.559157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.559173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.559187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.559203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.559217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.559233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.559247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.454 [2024-07-12 15:58:21.559263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.454 [2024-07-12 15:58:21.559277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:24.455 [2024-07-12 15:58:21.559897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.559971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.559984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 
15:58:21.560195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560495] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.455 [2024-07-12 15:58:21.560555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.455 [2024-07-12 15:58:21.560570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1791550 is same with the state(5) to be set 00:21:24.456 [2024-07-12 15:58:21.561816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.561839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.561861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.561877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.561893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.561906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.561922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.561937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.561952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.561966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.561982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.561995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.562973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.562987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.563003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.563028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.563044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.563057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.563073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.563094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.456 [2024-07-12 15:58:21.563109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.456 [2024-07-12 15:58:21.563123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.563814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.563829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1c230 is same with the state(5) to be set 00:21:24.457 [2024-07-12 15:58:21.565115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.565138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.565162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.565177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.565193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.565207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.565223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.565237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.565252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.565266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.565282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.565296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.565318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.565333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.565348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.565362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.565378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.565392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.565408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.565421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.565436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.565450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.565465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.565479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.565495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.565509] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.457 [2024-07-12 15:58:21.565524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.457 [2024-07-12 15:58:21.565537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.565568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.565598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.565627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.565657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.565690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.565731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.565771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.565801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.565831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.565860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.565890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.565920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.565949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.565978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.565994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
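The "(00/08) ... p:0 m:0 dnr:0" pair printed with each aborted command above is the NVMe completion status: status code type 0x0 (generic command status) and status code 0x08, "Command Aborted due to SQ Deletion", with the phase, more and do-not-retry bits clear. A minimal sketch of that decode follows; it is only an illustration against the standard NVMe completion-entry layout, not SPDK's own print routine, and the helper name decode_cqe_dw3 is invented for the example.

    /* Illustrative only (not SPDK source): decode the status printed as "(SCT/SC)". */
    #include <stdint.h>
    #include <stdio.h>

    static void decode_cqe_dw3(uint32_t dw3)
    {
        uint16_t cid = dw3 & 0xffff;        /* bits 15:0  - command identifier */
        unsigned p   = (dw3 >> 16) & 0x1;   /* bit  16    - phase tag          */
        unsigned sc  = (dw3 >> 17) & 0xff;  /* bits 24:17 - status code        */
        unsigned sct = (dw3 >> 25) & 0x7;   /* bits 27:25 - status code type   */
        unsigned m   = (dw3 >> 30) & 0x1;   /* bit  30    - more               */
        unsigned dnr = (dw3 >> 31) & 0x1;   /* bit  31    - do not retry       */
        printf("(%02x/%02x) cid:%u p:%u m:%u dnr:%u\n",
               sct, sc, (unsigned)cid, p, m, dnr);
    }

    int main(void)
    {
        /* SCT=0x0, SC=0x08, cid=0 -> "(00/08) cid:0 p:0 m:0 dnr:0", as in the log */
        decode_cqe_dw3(0x08u << 17);
        return 0;
    }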
00:21:24.458 [2024-07-12 15:58:21.566778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.458 [2024-07-12 15:58:21.566838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.458 [2024-07-12 15:58:21.566852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.566868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.566885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.566902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.566916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.566932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.566946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.566961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.566975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.566991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.567005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.567021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.567035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.567051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.567075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 
15:58:21.567090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.567104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.567119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c3c50 is same with the state(5) to be set 00:21:24.459 [2024-07-12 15:58:21.568400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.568974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.568988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.459 [2024-07-12 15:58:21.569447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.459 [2024-07-12 15:58:21.569462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.569975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.569991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.570005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.570022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.570036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.570052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.570066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.570082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.570097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.570113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.570131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.570148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.570163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.570179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.570197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.570213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.570227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.570243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.570256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.570272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.570286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.570302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.570316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.570332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.570347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.570364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.570378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.570394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.570408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.570422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25eb4d0 is same with the state(5) to be set 00:21:24.460 [2024-07-12 15:58:21.572089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:24.460 [2024-07-12 15:58:21.572124] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:24.460 [2024-07-12 15:58:21.572142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:24.460 [2024-07-12 15:58:21.572159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:24.460 [2024-07-12 15:58:21.572269] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:24.460 [2024-07-12 15:58:21.572402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:24.460 [2024-07-12 15:58:21.572708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.460 [2024-07-12 15:58:21.572754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1961ac0 with addr=10.0.0.2, port=4420 00:21:24.460 [2024-07-12 15:58:21.572773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1961ac0 is same with the state(5) to be set 00:21:24.460 [2024-07-12 15:58:21.572869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.460 [2024-07-12 15:58:21.572894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c51a0 with addr=10.0.0.2, port=4420 00:21:24.460 [2024-07-12 15:58:21.572910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c51a0 is same with the state(5) to be set 00:21:24.460 [2024-07-12 15:58:21.573016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.460 [2024-07-12 15:58:21.573050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1297610 with addr=10.0.0.2, port=4420 00:21:24.460 [2024-07-12 15:58:21.573066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1297610 is same with the state(5) to be set 00:21:24.460 [2024-07-12 15:58:21.573233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.460 [2024-07-12 15:58:21.573257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18665c0 with addr=10.0.0.2, port=4420 00:21:24.460 [2024-07-12 15:58:21.573272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18665c0 is same with the state(5) to be set 00:21:24.460 [2024-07-12 15:58:21.574454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.574480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.574505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.574521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.574538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.574553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.460 [2024-07-12 15:58:21.574569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.460 [2024-07-12 15:58:21.574583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.574599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 
[2024-07-12 15:58:21.574614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.574629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.574643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.574659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.574673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.574690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.574709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.574726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.574748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.574766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.574781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.574797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.574811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.574827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.574841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.574857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.574871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.574886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.574900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.574916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.574930] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.574947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.574960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.574977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.574990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575537] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.461 [2024-07-12 15:58:21.575689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.461 [2024-07-12 15:58:21.575705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.575719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.575735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.575757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.575773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.575787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.575803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.575817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.575832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.575846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.575862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.575880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.575896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.575910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.575926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.575940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.575956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.575970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.575986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.576420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.576435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x226b7f0 is same with the state(5) to be set 00:21:24.462 [2024-07-12 15:58:21.577722] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.577753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.577775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.577791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.577807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.577821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.577837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.577851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.577867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.577881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.577897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.577911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.577931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.577946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.577962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.577976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.577991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.578005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.578021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.578034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.578050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.578064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.578080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.578094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.578110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.578124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.578140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.578153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.578169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.578183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.578199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.578213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.578229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.462 [2024-07-12 15:58:21.578243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.462 [2024-07-12 15:58:21.578258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.578957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.578973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:24.463 [2024-07-12 15:58:21.578987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 
15:58:21.579293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.463 [2024-07-12 15:58:21.579525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.463 [2024-07-12 15:58:21.579538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.464 [2024-07-12 15:58:21.579554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.464 [2024-07-12 15:58:21.579569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.464 [2024-07-12 15:58:21.579585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.464 [2024-07-12 15:58:21.579599] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.464 [2024-07-12 15:58:21.579615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.464 [2024-07-12 15:58:21.579629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.464 [2024-07-12 15:58:21.579645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.464 [2024-07-12 15:58:21.579659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.464 [2024-07-12 15:58:21.579674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.464 [2024-07-12 15:58:21.579688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.464 [2024-07-12 15:58:21.579703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2443940 is same with the state(5) to be set 00:21:24.464 [2024-07-12 15:58:21.582292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.464 [2024-07-12 15:58:21.582330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:24.464 [2024-07-12 15:58:21.582349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:24.464 [2024-07-12 15:58:21.582369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:24.464 task offset: 30976 on job bdev=Nvme4n1 fails 00:21:24.464 00:21:24.464 Latency(us) 00:21:24.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.464 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.464 Job: Nvme1n1 ended in about 0.89 seconds with error 00:21:24.464 Verification LBA range: start 0x0 length 0x400 00:21:24.464 Nvme1n1 : 0.89 168.21 10.51 71.77 0.00 263585.38 18932.62 276513.37 00:21:24.464 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.464 Job: Nvme2n1 ended in about 0.90 seconds with error 00:21:24.464 Verification LBA range: start 0x0 length 0x400 00:21:24.464 Nvme2n1 : 0.90 141.69 8.86 70.84 0.00 291546.20 22816.24 260978.92 00:21:24.464 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.464 Job: Nvme3n1 ended in about 0.88 seconds with error 00:21:24.464 Verification LBA range: start 0x0 length 0x400 00:21:24.464 Nvme3n1 : 0.88 216.98 13.56 72.33 0.00 209364.20 11408.12 264085.81 00:21:24.464 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.464 Job: Nvme4n1 ended in about 0.88 seconds with error 00:21:24.464 Verification LBA range: start 0x0 length 0x400 00:21:24.464 Nvme4n1 : 0.88 221.95 13.87 72.47 0.00 201183.85 6990.51 254765.13 00:21:24.464 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.464 Job: Nvme5n1 ended in about 0.91 seconds with error 00:21:24.464 Verification LBA range: start 0x0 length 0x400 00:21:24.464 Nvme5n1 : 0.91 141.18 8.82 70.59 0.00 274258.30 
23495.87 265639.25 00:21:24.464 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.464 Job: Nvme6n1 ended in about 0.91 seconds with error 00:21:24.464 Verification LBA range: start 0x0 length 0x400 00:21:24.464 Nvme6n1 : 0.91 145.07 9.07 70.34 0.00 263846.05 21651.15 282727.16 00:21:24.464 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.464 Job: Nvme7n1 ended in about 0.91 seconds with error 00:21:24.464 Verification LBA range: start 0x0 length 0x400 00:21:24.464 Nvme7n1 : 0.91 140.17 8.76 70.08 0.00 264536.37 20194.80 256318.58 00:21:24.464 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.464 Job: Nvme8n1 ended in about 0.92 seconds with error 00:21:24.464 Verification LBA range: start 0x0 length 0x400 00:21:24.464 Nvme8n1 : 0.92 138.75 8.67 69.37 0.00 261794.07 27962.03 253211.69 00:21:24.464 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.464 Job: Nvme9n1 ended in about 0.93 seconds with error 00:21:24.464 Verification LBA range: start 0x0 length 0x400 00:21:24.464 Nvme9n1 : 0.93 143.67 8.98 69.13 0.00 250330.02 19709.35 292047.83 00:21:24.464 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:24.464 Job: Nvme10n1 ended in about 0.92 seconds with error 00:21:24.464 Verification LBA range: start 0x0 length 0x400 00:21:24.464 Nvme10n1 : 0.92 139.66 8.73 69.83 0.00 247933.66 20583.16 262532.36 00:21:24.464 =================================================================================================================== 00:21:24.464 Total : 1597.32 99.83 706.76 0.00 249944.46 6990.51 292047.83 00:21:24.464 [2024-07-12 15:58:21.610005] app.c:1057:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:24.464 [2024-07-12 15:58:21.610109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:21:24.464 [2024-07-12 15:58:21.610448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.464 [2024-07-12 15:58:21.610486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x194a520 with addr=10.0.0.2, port=4420 00:21:24.464 [2024-07-12 15:58:21.610507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x194a520 is same with the state(5) to be set 00:21:24.464 [2024-07-12 15:58:21.610535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1961ac0 (9): Bad file descriptor 00:21:24.464 [2024-07-12 15:58:21.610559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c51a0 (9): Bad file descriptor 00:21:24.464 [2024-07-12 15:58:21.610578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1297610 (9): Bad file descriptor 00:21:24.464 [2024-07-12 15:58:21.610597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18665c0 (9): Bad file descriptor 00:21:24.464 [2024-07-12 15:58:21.610665] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:24.464 [2024-07-12 15:58:21.610691] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:24.464 [2024-07-12 15:58:21.610735] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
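A quick consistency check on the bdevperf summary table above: each job ran at queue depth 64 (per the "depth: 64" in the job header), so by Little's law the in-flight count should roughly equal the total completion rate (IOPS plus Fail/s) times the average latency in seconds. Taking Nvme1n1 as a worked example, and reading the Average column as end-to-end latency in microseconds as the "Latency(us)" header indicates:

  (168.21 + 71.77) IO/s * 263585.38 us  =  239.98/s * 0.2636 s  ~=  63.3 in flight  ~=  the configured depth of 64

The other rows work out the same way, so the very high average latencies are consistent with 64 commands sitting queued against controllers that are being torn down, rather than a measurement artifact.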
00:21:24.464 [2024-07-12 15:58:21.610784] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:24.464 [2024-07-12 15:58:21.611120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.464 [2024-07-12 15:58:21.611160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1795d90 with addr=10.0.0.2, port=4420 00:21:24.464 [2024-07-12 15:58:21.611177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1795d90 is same with the state(5) to be set 00:21:24.464 [2024-07-12 15:58:21.611356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.464 [2024-07-12 15:58:21.611392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17b9d50 with addr=10.0.0.2, port=4420 00:21:24.464 [2024-07-12 15:58:21.611409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17b9d50 is same with the state(5) to be set 00:21:24.464 [2024-07-12 15:58:21.611618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.464 [2024-07-12 15:58:21.611652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c1a40 with addr=10.0.0.2, port=4420 00:21:24.464 [2024-07-12 15:58:21.611668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c1a40 is same with the state(5) to be set 00:21:24.464 [2024-07-12 15:58:21.611786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.464 [2024-07-12 15:58:21.611813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1956040 with addr=10.0.0.2, port=4420 00:21:24.464 [2024-07-12 15:58:21.611830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956040 is same with the state(5) to be set 00:21:24.464 [2024-07-12 15:58:21.611926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:24.464 [2024-07-12 15:58:21.611962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185ca20 with addr=10.0.0.2, port=4420 00:21:24.464 [2024-07-12 15:58:21.611979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185ca20 is same with the state(5) to be set 00:21:24.464 [2024-07-12 15:58:21.611998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x194a520 (9): Bad file descriptor 00:21:24.464 [2024-07-12 15:58:21.612026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:24.464 [2024-07-12 15:58:21.612040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:24.464 [2024-07-12 15:58:21.612055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:24.464 [2024-07-12 15:58:21.612077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:24.464 [2024-07-12 15:58:21.612092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:24.464 [2024-07-12 15:58:21.612106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:21:24.464 [2024-07-12 15:58:21.612123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:24.464 [2024-07-12 15:58:21.612137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:24.464 [2024-07-12 15:58:21.612150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:24.464 [2024-07-12 15:58:21.612167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:24.464 [2024-07-12 15:58:21.612181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:24.464 [2024-07-12 15:58:21.612208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:24.464 [2024-07-12 15:58:21.612230] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:24.464 [2024-07-12 15:58:21.612274] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:24.464 [2024-07-12 15:58:21.612296] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:24.464 [2024-07-12 15:58:21.612314] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:24.464 [2024-07-12 15:58:21.612331] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:24.464 [2024-07-12 15:58:21.612977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.464 [2024-07-12 15:58:21.613002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.464 [2024-07-12 15:58:21.613021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.464 [2024-07-12 15:58:21.613032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.465 [2024-07-12 15:58:21.613049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1795d90 (9): Bad file descriptor 00:21:24.465 [2024-07-12 15:58:21.613068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b9d50 (9): Bad file descriptor 00:21:24.465 [2024-07-12 15:58:21.613087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c1a40 (9): Bad file descriptor 00:21:24.465 [2024-07-12 15:58:21.613105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1956040 (9): Bad file descriptor 00:21:24.465 [2024-07-12 15:58:21.613122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185ca20 (9): Bad file descriptor 00:21:24.465 [2024-07-12 15:58:21.613138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:24.465 [2024-07-12 15:58:21.613151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:24.465 [2024-07-12 15:58:21.613165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:21:24.465 [2024-07-12 15:58:21.613227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.465 [2024-07-12 15:58:21.613247] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:24.465 [2024-07-12 15:58:21.613260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:24.465 [2024-07-12 15:58:21.613274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.465 [2024-07-12 15:58:21.613290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:24.465 [2024-07-12 15:58:21.613304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:24.465 [2024-07-12 15:58:21.613317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:24.465 [2024-07-12 15:58:21.613333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:24.465 [2024-07-12 15:58:21.613347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:24.465 [2024-07-12 15:58:21.613361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:24.465 [2024-07-12 15:58:21.613377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:24.465 [2024-07-12 15:58:21.613391] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:24.465 [2024-07-12 15:58:21.613403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:24.465 [2024-07-12 15:58:21.613440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:24.465 [2024-07-12 15:58:21.613456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:24.465 [2024-07-12 15:58:21.613469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:24.465 [2024-07-12 15:58:21.613522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.465 [2024-07-12 15:58:21.613541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.465 [2024-07-12 15:58:21.613554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.465 [2024-07-12 15:58:21.613566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:24.465 [2024-07-12 15:58:21.613578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
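The block above is the tc3 shutdown sequence as seen from the host side: outstanding READ/WRITE commands are aborted with "SQ DELETION" while the target's submission queues are deleted, the bdev_nvme layer then tries to reset and reconnect each subsystem (cnode1 through cnode10) to 10.0.0.2:4420, the TCP connect() calls fail with errno 111 because the target is already going away, and every controller ends in the failed state with "Resetting controller failed." When reproducing this by hand, the controller state can be queried over the application's RPC socket; this is a minimal sketch, assuming the workspace layout used by this job and that the bdevperf RPC socket (/var/tmp/bdevperf.sock) is still reachable, which it is not once spdk_app_stop has run:

  # Hypothetical manual inspection; paths and socket name are assumptions from this job's setup.
  $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
  $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_reset_controller Nvme1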
00:21:25.031 15:58:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:25.031 15:58:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 802762 00:21:25.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (802762) - No such process 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:25.966 rmmod nvme_tcp 00:21:25.966 rmmod nvme_fabrics 00:21:25.966 rmmod nvme_keyring 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:25.966 15:58:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.896 15:58:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:27.896 00:21:27.896 real 0m7.617s 00:21:27.896 user 0m19.018s 00:21:27.896 sys 0m1.477s 00:21:27.896 
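The trace above is the standard nvmftestfini/nvmf_tcp_fini teardown from test/nvmf/common.sh: sync, unload the kernel NVMe-oF initiator modules (the modprobe -r of nvme-tcp also pulls out nvme_fabrics and nvme_keyring, as the rmmod lines show), remove the SPDK network namespace, and flush the test address from the remaining interface. A rough manual equivalent on a box left in this state is sketched below; the interface name cvl_0_1 and the namespace name cvl_0_0_ns_spdk are taken from this job and will differ on other hardware, and the explicit ip netns delete is a hypothetical stand-in for what _remove_spdk_ns does:

  $ sync
  $ sudo modprobe -v -r nvme-tcp        # removes nvme_tcp, nvme_fabrics, nvme_keyring as dependents
  $ sudo modprobe -v -r nvme-fabrics    # no-op if already gone
  $ sudo ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
  $ sudo ip -4 addr flush cvl_0_1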
15:58:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:27.896 15:58:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:27.896 ************************************ 00:21:27.896 END TEST nvmf_shutdown_tc3 00:21:27.896 ************************************ 00:21:28.154 15:58:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:28.154 15:58:25 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:28.154 00:21:28.154 real 0m27.995s 00:21:28.154 user 1m19.495s 00:21:28.154 sys 0m6.351s 00:21:28.154 15:58:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:28.154 15:58:25 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:28.154 ************************************ 00:21:28.154 END TEST nvmf_shutdown 00:21:28.154 ************************************ 00:21:28.154 15:58:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:28.154 15:58:25 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:28.154 15:58:25 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:28.154 15:58:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:28.154 15:58:25 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:28.154 15:58:25 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:28.154 15:58:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:28.154 15:58:25 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:21:28.154 15:58:25 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:28.154 15:58:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:28.154 15:58:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:28.154 15:58:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:28.154 ************************************ 00:21:28.154 START TEST nvmf_multicontroller 00:21:28.154 ************************************ 00:21:28.154 15:58:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:28.154 * Looking for test storage... 
00:21:28.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:28.154 15:58:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.154 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:28.154 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.154 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.154 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:28.155 15:58:25 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:28.155 15:58:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.695 15:58:27 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.695 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:30.696 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:30.696 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:30.696 Found net devices under 0000:84:00.0: cvl_0_0 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:30.696 Found net devices under 0000:84:00.1: cvl_0_1 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.696 15:58:27 
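
The gather_supported_nvmf_pci_devs pass above matches the two Intel E810 functions (device ID 0x159b) at 0000:84:00.0 and 0000:84:00.1 and then resolves each one to its kernel net device by globbing sysfs, which is how cvl_0_0 and cvl_0_1 end up in net_devs. A minimal sketch of that lookup, condensed from the trace (the PCI addresses and interface names are specific to this test bed):

    # Resolve the net devices backing the two E810 functions found above.
    for pci in 0000:84:00.0 0000:84:00.1; do
        for netdir in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$netdir" ] || continue
            echo "Found net devices under $pci: $(basename "$netdir")"
        done
    done
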
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:30.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:30.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:21:30.696 00:21:30.696 --- 10.0.0.2 ping statistics --- 00:21:30.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.696 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:30.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:21:30.696 00:21:30.696 --- 10.0.0.1 ping statistics --- 00:21:30.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.696 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=805301 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 805301 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 805301 ']' 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.696 [2024-07-12 15:58:27.578238] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:21:30.696 [2024-07-12 15:58:27.578317] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.696 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.696 [2024-07-12 15:58:27.640691] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:30.696 [2024-07-12 15:58:27.752189] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.696 [2024-07-12 15:58:27.752249] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.696 [2024-07-12 15:58:27.752278] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.696 [2024-07-12 15:58:27.752289] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.696 [2024-07-12 15:58:27.752307] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
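
nvmf_tcp_init above builds a two-endpoint topology on a single host: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), and nvmf_tgt is then launched inside the namespace. Condensed to the commands visible in the trace (interface names and addresses are those of this run):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> root ns

Because the target lives in the namespace, every target-side command that follows carries the ip netns exec cvl_0_0_ns_spdk prefix, which is exactly what NVMF_TARGET_NS_CMD prepends to NVMF_APP.
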
00:21:30.696 [2024-07-12 15:58:27.752400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.696 [2024-07-12 15:58:27.752461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:30.696 [2024-07-12 15:58:27.752464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.696 [2024-07-12 15:58:27.890325] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.696 15:58:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.697 Malloc0 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.697 [2024-07-12 15:58:27.947813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.697 
15:58:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.697 [2024-07-12 15:58:27.955680] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.697 Malloc1 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.697 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.953 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.953 15:58:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:30.953 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.953 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.953 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.953 15:58:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:30.953 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.953 15:58:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.953 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.954 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:30.954 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.954 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:30.954 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.954 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=805443 00:21:30.954 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:30.954 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 805443 /var/tmp/bdevperf.sock 00:21:30.954 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 805443 ']' 00:21:30.954 15:58:28 
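
At this point the target in the namespace has been provisioned entirely through rpc_cmd (the suite's JSON-RPC wrapper, talking to /var/tmp/spdk.sock): a TCP transport, two 64 MB malloc bdevs with 512-byte blocks, and two subsystems each listening on 10.0.0.2 ports 4420 and 4421. bdevperf is then started with -z (wait for an RPC before running) on its own socket so the test can attach and detach controllers at will. The sequence, reduced to the calls shown in the trace (bdevperf here is the in-tree build/examples/bdevperf binary):

    # target side, prefixed with the netns exec in the actual run
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
    # initiator side: bdevperf in RPC-wait mode on a separate socket
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
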
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.954 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:30.954 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.954 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.954 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.954 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.211 NVMe0n1 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.211 1 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.211 request: 00:21:31.211 { 00:21:31.211 "name": "NVMe0", 00:21:31.211 "trtype": "tcp", 00:21:31.211 "traddr": "10.0.0.2", 00:21:31.211 "adrfam": "ipv4", 00:21:31.211 "trsvcid": "4420", 00:21:31.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.211 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:31.211 "hostaddr": "10.0.0.2", 00:21:31.211 "hostsvcid": "60000", 00:21:31.211 "prchk_reftag": false, 00:21:31.211 "prchk_guard": false, 00:21:31.211 "hdgst": false, 00:21:31.211 "ddgst": false, 00:21:31.211 "method": "bdev_nvme_attach_controller", 00:21:31.211 "req_id": 1 00:21:31.211 } 00:21:31.211 Got JSON-RPC error response 00:21:31.211 response: 00:21:31.211 { 00:21:31.211 "code": -114, 00:21:31.211 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:31.211 } 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.211 request: 00:21:31.211 { 00:21:31.211 "name": "NVMe0", 00:21:31.211 "trtype": "tcp", 00:21:31.211 "traddr": "10.0.0.2", 00:21:31.211 "adrfam": "ipv4", 00:21:31.211 "trsvcid": "4420", 00:21:31.211 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:31.211 "hostaddr": "10.0.0.2", 00:21:31.211 "hostsvcid": "60000", 00:21:31.211 "prchk_reftag": false, 00:21:31.211 "prchk_guard": false, 
00:21:31.211 "hdgst": false, 00:21:31.211 "ddgst": false, 00:21:31.211 "method": "bdev_nvme_attach_controller", 00:21:31.211 "req_id": 1 00:21:31.211 } 00:21:31.211 Got JSON-RPC error response 00:21:31.211 response: 00:21:31.211 { 00:21:31.211 "code": -114, 00:21:31.211 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:31.211 } 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.211 request: 00:21:31.211 { 00:21:31.211 "name": "NVMe0", 00:21:31.211 "trtype": "tcp", 00:21:31.211 "traddr": "10.0.0.2", 00:21:31.211 "adrfam": "ipv4", 00:21:31.211 "trsvcid": "4420", 00:21:31.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.211 "hostaddr": "10.0.0.2", 00:21:31.211 "hostsvcid": "60000", 00:21:31.211 "prchk_reftag": false, 00:21:31.211 "prchk_guard": false, 00:21:31.211 "hdgst": false, 00:21:31.211 "ddgst": false, 00:21:31.211 "multipath": "disable", 00:21:31.211 "method": "bdev_nvme_attach_controller", 00:21:31.211 "req_id": 1 00:21:31.211 } 00:21:31.211 Got JSON-RPC error response 00:21:31.211 response: 00:21:31.211 { 00:21:31.211 "code": -114, 00:21:31.211 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:31.211 } 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:31.211 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:31.212 15:58:28 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:31.212 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.469 request: 00:21:31.469 { 00:21:31.469 "name": "NVMe0", 00:21:31.469 "trtype": "tcp", 00:21:31.469 "traddr": "10.0.0.2", 00:21:31.469 "adrfam": "ipv4", 00:21:31.469 "trsvcid": "4420", 00:21:31.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.469 "hostaddr": "10.0.0.2", 00:21:31.469 "hostsvcid": "60000", 00:21:31.469 "prchk_reftag": false, 00:21:31.469 "prchk_guard": false, 00:21:31.469 "hdgst": false, 00:21:31.469 "ddgst": false, 00:21:31.469 "multipath": "failover", 00:21:31.469 "method": "bdev_nvme_attach_controller", 00:21:31.469 "req_id": 1 00:21:31.469 } 00:21:31.469 Got JSON-RPC error response 00:21:31.469 response: 00:21:31.469 { 00:21:31.469 "code": -114, 00:21:31.469 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:31.469 } 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.469 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- 
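
The NOT-wrapped attach attempts above are the core of the test: once NVMe0 is attached to cnode1 at 10.0.0.2:4420 with host port 60000, re-attaching under the same bdev name with a different hostnqn, against a different subsystem, with multipath explicitly disabled, or with multipath set to failover on the same path all return JSON-RPC error -114, and the test only proceeds because each call exits non-zero. Attaching NVMe0 to the second listener on port 4421 afterwards is accepted. A sketch of the expected-failure pattern, using the same bdevperf RPC socket (NOT is the suite's helper that inverts the exit status, as seen in the trace):

    # Must be rejected with -114: same controller name, conflicting target path.
    NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
    # Must be rejected: second path to an existing controller with multipath disabled.
    NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
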
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.469 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:31.469 15:58:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:32.838 0 00:21:32.838 15:58:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:32.838 15:58:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.838 15:58:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:32.838 15:58:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.838 15:58:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 805443 00:21:32.838 15:58:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 805443 ']' 00:21:32.838 15:58:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 805443 00:21:32.838 15:58:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:32.838 15:58:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.838 15:58:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 805443 00:21:32.838 15:58:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:32.838 15:58:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:32.838 15:58:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 805443' 00:21:32.838 killing process with pid 805443 00:21:32.838 15:58:29 
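
With the paths in place, the test exercises controller management on the live bdevperf process: it detaches the 4421 path from NVMe0, attaches a separate NVMe1 controller on 4421, checks that bdev_nvme_get_controllers now reports two controllers, and only then starts the queued I/O job by sending perform_tests over bdevperf's RPC socket. Roughly, using the socket and in-tree helper script from this run:

    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    [ "$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe)" = 2 ]
    # release the bdevperf job that -z left waiting for an RPC
    spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
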
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 805443 00:21:32.838 15:58:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 805443 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:33.095 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:33.095 [2024-07-12 15:58:28.060881] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:21:33.095 [2024-07-12 15:58:28.060976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805443 ] 00:21:33.095 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.095 [2024-07-12 15:58:28.120075] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.095 [2024-07-12 15:58:28.228888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.095 [2024-07-12 15:58:28.697070] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 44b7ccdf-a014-4d7a-a321-7a2901b76b8b already exists 00:21:33.095 [2024-07-12 15:58:28.697109] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:44b7ccdf-a014-4d7a-a321-7a2901b76b8b alias for bdev NVMe1n1 00:21:33.095 [2024-07-12 15:58:28.697124] bdev_nvme.c:4322:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:33.095 Running I/O for 1 seconds... 
00:21:33.095 00:21:33.095 Latency(us) 00:21:33.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.095 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:33.095 NVMe0n1 : 1.00 19135.21 74.75 0.00 0.00 6678.82 3762.25 12039.21 00:21:33.095 =================================================================================================================== 00:21:33.095 Total : 19135.21 74.75 0.00 0.00 6678.82 3762.25 12039.21 00:21:33.095 Received shutdown signal, test time was about 1.000000 seconds 00:21:33.095 00:21:33.095 Latency(us) 00:21:33.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.095 =================================================================================================================== 00:21:33.095 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.095 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:33.095 rmmod nvme_tcp 00:21:33.095 rmmod nvme_fabrics 00:21:33.095 rmmod nvme_keyring 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 805301 ']' 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 805301 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 805301 ']' 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 805301 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 805301 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 805301' 00:21:33.095 killing process with pid 805301 00:21:33.095 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 805301 00:21:33.095 15:58:30 
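
The bdevperf summary above is internally consistent with the job parameters (-q 128 -o 4096 -w write -t 1): 19135.21 write IOPS at 4096-byte I/Os is about 74.75 MiB/s, matching the MiB/s column, and at queue depth 128 an average latency of 6678.82 us implies roughly 19165 IOPS, in line with the measured rate. A quick check with the figures taken from the table:

    awk 'BEGIN { iops = 19135.21; iosize = 4096; qd = 128; avg_us = 6678.82;
                 printf "throughput:       %.2f MiB/s\n", iops * iosize / 1048576;
                 printf "IOPS from QD/lat: %.0f\n", qd / (avg_us / 1e6) }'
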
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 805301 00:21:33.353 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:33.353 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:33.353 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:33.353 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.353 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:33.353 15:58:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.353 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.353 15:58:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.887 15:58:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:35.887 00:21:35.887 real 0m7.287s 00:21:35.887 user 0m10.985s 00:21:35.887 sys 0m2.359s 00:21:35.887 15:58:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:35.887 15:58:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:35.887 ************************************ 00:21:35.887 END TEST nvmf_multicontroller 00:21:35.887 ************************************ 00:21:35.887 15:58:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:35.887 15:58:32 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:35.887 15:58:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:35.887 15:58:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.887 15:58:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:35.887 ************************************ 00:21:35.887 START TEST nvmf_aer 00:21:35.887 ************************************ 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:35.887 * Looking for test storage... 
00:21:35.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.887 15:58:32 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:35.888 15:58:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:37.790 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:37.790 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:21:37.790 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:37.790 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:37.791 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 
0x159b)' 00:21:37.791 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:37.791 Found net devices under 0000:84:00.0: cvl_0_0 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:37.791 Found net devices under 0000:84:00.1: cvl_0_1 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.791 
15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:37.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:37.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:21:37.791 00:21:37.791 --- 10.0.0.2 ping statistics --- 00:21:37.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.791 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:37.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:21:37.791 00:21:37.791 --- 10.0.0.1 ping statistics --- 00:21:37.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.791 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=807668 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 807668 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 807668 ']' 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.791 15:58:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:37.791 [2024-07-12 15:58:35.039677] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:21:37.791 [2024-07-12 15:58:35.039787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.791 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.049 [2024-07-12 15:58:35.107050] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.049 [2024-07-12 15:58:35.219683] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.049 [2024-07-12 15:58:35.219762] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:38.049 [2024-07-12 15:58:35.219779] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.049 [2024-07-12 15:58:35.219795] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.049 [2024-07-12 15:58:35.219805] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.049 [2024-07-12 15:58:35.219857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.049 [2024-07-12 15:58:35.219920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.049 [2024-07-12 15:58:35.219984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.049 [2024-07-12 15:58:35.219987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.306 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.306 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:21:38.306 15:58:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:38.306 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:38.306 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.306 15:58:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.306 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:38.306 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.306 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.306 [2024-07-12 15:58:35.376498] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.306 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.306 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:38.306 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.307 Malloc0 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.307 [2024-07-12 15:58:35.430295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.307 [ 00:21:38.307 { 00:21:38.307 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:38.307 "subtype": "Discovery", 00:21:38.307 "listen_addresses": [], 00:21:38.307 "allow_any_host": true, 00:21:38.307 "hosts": [] 00:21:38.307 }, 00:21:38.307 { 00:21:38.307 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.307 "subtype": "NVMe", 00:21:38.307 "listen_addresses": [ 00:21:38.307 { 00:21:38.307 "trtype": "TCP", 00:21:38.307 "adrfam": "IPv4", 00:21:38.307 "traddr": "10.0.0.2", 00:21:38.307 "trsvcid": "4420" 00:21:38.307 } 00:21:38.307 ], 00:21:38.307 "allow_any_host": true, 00:21:38.307 "hosts": [], 00:21:38.307 "serial_number": "SPDK00000000000001", 00:21:38.307 "model_number": "SPDK bdev Controller", 00:21:38.307 "max_namespaces": 2, 00:21:38.307 "min_cntlid": 1, 00:21:38.307 "max_cntlid": 65519, 00:21:38.307 "namespaces": [ 00:21:38.307 { 00:21:38.307 "nsid": 1, 00:21:38.307 "bdev_name": "Malloc0", 00:21:38.307 "name": "Malloc0", 00:21:38.307 "nguid": "DBF828FBA6514D59B9FCAD28ADDE91A1", 00:21:38.307 "uuid": "dbf828fb-a651-4d59-b9fc-ad28adde91a1" 00:21:38.307 } 00:21:38.307 ] 00:21:38.307 } 00:21:38.307 ] 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=807702 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:38.307 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:21:38.307 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 Malloc1 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 [ 00:21:38.565 { 00:21:38.565 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:38.565 "subtype": "Discovery", 00:21:38.565 "listen_addresses": [], 00:21:38.565 "allow_any_host": true, 00:21:38.565 "hosts": [] 00:21:38.565 }, 00:21:38.565 { 00:21:38.565 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.565 "subtype": "NVMe", 00:21:38.565 "listen_addresses": [ 00:21:38.565 { 00:21:38.565 "trtype": "TCP", 00:21:38.565 "adrfam": "IPv4", 00:21:38.565 "traddr": "10.0.0.2", 00:21:38.565 "trsvcid": "4420" 00:21:38.565 } 00:21:38.565 ], 00:21:38.565 "allow_any_host": true, 00:21:38.565 "hosts": [], 00:21:38.565 "serial_number": "SPDK00000000000001", 00:21:38.565 "model_number": "SPDK bdev Controller", 00:21:38.565 "max_namespaces": 2, 00:21:38.565 "min_cntlid": 1, 00:21:38.565 "max_cntlid": 65519, 00:21:38.565 "namespaces": [ 00:21:38.565 { 00:21:38.565 "nsid": 1, 00:21:38.565 "bdev_name": "Malloc0", 00:21:38.565 "name": "Malloc0", 00:21:38.565 "nguid": "DBF828FBA6514D59B9FCAD28ADDE91A1", 00:21:38.565 "uuid": "dbf828fb-a651-4d59-b9fc-ad28adde91a1" 00:21:38.565 }, 00:21:38.565 { 00:21:38.565 "nsid": 2, 00:21:38.565 "bdev_name": "Malloc1", 00:21:38.565 "name": "Malloc1", 00:21:38.565 "nguid": "2F7356E59D13411BB8A5C7BD005BC945", 00:21:38.565 "uuid": "2f7356e5-9d13-411b-b8a5-c7bd005bc945" 00:21:38.565 } 00:21:38.565 ] 00:21:38.565 } 00:21:38.565 ] 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 807702 00:21:38.565 Asynchronous Event Request test 00:21:38.565 Attaching to 10.0.0.2 00:21:38.565 Attached to 10.0.0.2 00:21:38.565 Registering asynchronous event callbacks... 00:21:38.565 Starting namespace attribute notice tests for all controllers... 00:21:38.565 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:38.565 aer_cb - Changed Namespace 00:21:38.565 Cleaning up... 
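The nvmf_aer run traced above reduces to a short RPC sequence on the target plus the standalone aer tool on the host side. A minimal sketch of the same steps, assuming the stock scripts/rpc.py client stands in for the harness's rpc_cmd wrapper, paths are relative to the spdk checkout, and the target is simply backgrounded with &:

  # target side: start nvmf_tgt inside the test namespace, then build the subsystem
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: run the aer tool, then add a second namespace so the controller
  # raises the namespace-attribute-changed asynchronous event the tool waits for
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The touch file is what the script's waitforfile polls before adding the second namespace, and the 'aer_cb - Changed Namespace' line above shows that add_ns produced the expected notice before the test cleaned up.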
00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:38.565 15:58:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:38.565 rmmod nvme_tcp 00:21:38.565 rmmod nvme_fabrics 00:21:38.565 rmmod nvme_keyring 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 807668 ']' 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 807668 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 807668 ']' 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 807668 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 807668 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 807668' 00:21:38.823 killing process with pid 807668 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 807668 00:21:38.823 15:58:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 807668 00:21:39.082 15:58:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:39.082 15:58:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:39.082 15:58:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:21:39.082 15:58:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:39.082 15:58:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:39.082 15:58:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.082 15:58:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.082 15:58:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.990 15:58:38 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:40.990 00:21:40.990 real 0m5.596s 00:21:40.990 user 0m4.346s 00:21:40.990 sys 0m2.063s 00:21:40.990 15:58:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:40.990 15:58:38 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:40.990 ************************************ 00:21:40.990 END TEST nvmf_aer 00:21:40.990 ************************************ 00:21:40.990 15:58:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:40.990 15:58:38 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:40.990 15:58:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:40.990 15:58:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.990 15:58:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:40.990 ************************************ 00:21:40.990 START TEST nvmf_async_init 00:21:40.990 ************************************ 00:21:40.990 15:58:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:41.249 * Looking for test storage... 
00:21:41.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6c058fb428fa41ae88c468e7cedcc369 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:41.249 15:58:38 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:21:41.249 15:58:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.780 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:43.780 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:21:43.780 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:43.780 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:43.780 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:43.780 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:43.780 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:43.780 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:21:43.780 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:43.780 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:21:43.780 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:43.781 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:43.781 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:43.781 Found net devices under 0000:84:00.0: cvl_0_0 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:43.781 Found net devices under 0000:84:00.1: cvl_0_1 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:43.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:21:43.781 00:21:43.781 --- 10.0.0.2 ping statistics --- 00:21:43.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.781 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:43.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:43.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:21:43.781 00:21:43.781 --- 10.0.0.1 ping statistics --- 00:21:43.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.781 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=809763 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 809763 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 809763 ']' 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.781 [2024-07-12 15:58:40.670647] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
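The nvmf_tcp_init sequence traced here (and earlier in the nvmf_aer run) builds the test network by moving one port of the E810 pair into a private namespace and addressing both ends. A condensed sketch of those steps, using the interface names and addresses from this run and assuming root privileges:

  # flush and split the two ports: cvl_0_0 becomes the target side inside the namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator keeps 10.0.0.1 on cvl_0_1; the namespaced target side gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the NVMe/TCP port and check reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target is then launched under ip netns exec cvl_0_0_ns_spdk (see the nvmf_tgt command line above), so the initiator reaches 10.0.0.2:4420 over cvl_0_1 while RPCs still go to the usual /var/tmp/spdk.sock unix socket.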
00:21:43.781 [2024-07-12 15:58:40.670730] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.781 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.781 [2024-07-12 15:58:40.731573] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.781 [2024-07-12 15:58:40.835874] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.781 [2024-07-12 15:58:40.835928] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.781 [2024-07-12 15:58:40.835956] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.781 [2024-07-12 15:58:40.835968] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.781 [2024-07-12 15:58:40.835977] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.781 [2024-07-12 15:58:40.836003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:43.781 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.782 [2024-07-12 15:58:40.974378] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.782 null0 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.782 15:58:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.782 15:58:41 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.782 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6c058fb428fa41ae88c468e7cedcc369 00:21:43.782 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.782 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.782 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.782 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:43.782 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.782 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:43.782 [2024-07-12 15:58:41.014601] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.782 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.782 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:43.782 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.782 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:44.039 nvme0n1 00:21:44.039 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.039 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:44.039 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.039 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:44.039 [ 00:21:44.039 { 00:21:44.039 "name": "nvme0n1", 00:21:44.039 "aliases": [ 00:21:44.039 "6c058fb4-28fa-41ae-88c4-68e7cedcc369" 00:21:44.039 ], 00:21:44.039 "product_name": "NVMe disk", 00:21:44.039 "block_size": 512, 00:21:44.039 "num_blocks": 2097152, 00:21:44.039 "uuid": "6c058fb4-28fa-41ae-88c4-68e7cedcc369", 00:21:44.039 "assigned_rate_limits": { 00:21:44.040 "rw_ios_per_sec": 0, 00:21:44.040 "rw_mbytes_per_sec": 0, 00:21:44.040 "r_mbytes_per_sec": 0, 00:21:44.040 "w_mbytes_per_sec": 0 00:21:44.040 }, 00:21:44.040 "claimed": false, 00:21:44.040 "zoned": false, 00:21:44.040 "supported_io_types": { 00:21:44.040 "read": true, 00:21:44.040 "write": true, 00:21:44.040 "unmap": false, 00:21:44.040 "flush": true, 00:21:44.040 "reset": true, 00:21:44.040 "nvme_admin": true, 00:21:44.040 "nvme_io": true, 00:21:44.040 "nvme_io_md": false, 00:21:44.040 "write_zeroes": true, 00:21:44.040 "zcopy": false, 00:21:44.040 "get_zone_info": false, 00:21:44.040 "zone_management": false, 00:21:44.040 "zone_append": false, 00:21:44.040 "compare": true, 00:21:44.040 "compare_and_write": true, 00:21:44.040 "abort": true, 00:21:44.040 "seek_hole": false, 00:21:44.040 "seek_data": false, 00:21:44.040 "copy": true, 00:21:44.040 "nvme_iov_md": false 00:21:44.040 }, 00:21:44.040 "memory_domains": [ 00:21:44.040 { 00:21:44.040 "dma_device_id": "system", 00:21:44.040 "dma_device_type": 1 00:21:44.040 } 00:21:44.040 ], 00:21:44.040 "driver_specific": { 00:21:44.040 "nvme": [ 00:21:44.040 { 00:21:44.040 "trid": { 00:21:44.040 "trtype": "TCP", 00:21:44.040 "adrfam": "IPv4", 00:21:44.040 "traddr": "10.0.0.2", 
00:21:44.040 "trsvcid": "4420", 00:21:44.040 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:44.040 }, 00:21:44.040 "ctrlr_data": { 00:21:44.040 "cntlid": 1, 00:21:44.040 "vendor_id": "0x8086", 00:21:44.040 "model_number": "SPDK bdev Controller", 00:21:44.040 "serial_number": "00000000000000000000", 00:21:44.040 "firmware_revision": "24.09", 00:21:44.040 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:44.040 "oacs": { 00:21:44.040 "security": 0, 00:21:44.040 "format": 0, 00:21:44.040 "firmware": 0, 00:21:44.040 "ns_manage": 0 00:21:44.040 }, 00:21:44.040 "multi_ctrlr": true, 00:21:44.040 "ana_reporting": false 00:21:44.040 }, 00:21:44.040 "vs": { 00:21:44.040 "nvme_version": "1.3" 00:21:44.040 }, 00:21:44.040 "ns_data": { 00:21:44.040 "id": 1, 00:21:44.040 "can_share": true 00:21:44.040 } 00:21:44.040 } 00:21:44.040 ], 00:21:44.040 "mp_policy": "active_passive" 00:21:44.040 } 00:21:44.040 } 00:21:44.040 ] 00:21:44.040 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.040 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:44.040 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.040 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:44.040 [2024-07-12 15:58:41.267839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:44.040 [2024-07-12 15:58:41.267938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbea740 (9): Bad file descriptor 00:21:44.298 [2024-07-12 15:58:41.440878] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:44.298 [ 00:21:44.298 { 00:21:44.298 "name": "nvme0n1", 00:21:44.298 "aliases": [ 00:21:44.298 "6c058fb4-28fa-41ae-88c4-68e7cedcc369" 00:21:44.298 ], 00:21:44.298 "product_name": "NVMe disk", 00:21:44.298 "block_size": 512, 00:21:44.298 "num_blocks": 2097152, 00:21:44.298 "uuid": "6c058fb4-28fa-41ae-88c4-68e7cedcc369", 00:21:44.298 "assigned_rate_limits": { 00:21:44.298 "rw_ios_per_sec": 0, 00:21:44.298 "rw_mbytes_per_sec": 0, 00:21:44.298 "r_mbytes_per_sec": 0, 00:21:44.298 "w_mbytes_per_sec": 0 00:21:44.298 }, 00:21:44.298 "claimed": false, 00:21:44.298 "zoned": false, 00:21:44.298 "supported_io_types": { 00:21:44.298 "read": true, 00:21:44.298 "write": true, 00:21:44.298 "unmap": false, 00:21:44.298 "flush": true, 00:21:44.298 "reset": true, 00:21:44.298 "nvme_admin": true, 00:21:44.298 "nvme_io": true, 00:21:44.298 "nvme_io_md": false, 00:21:44.298 "write_zeroes": true, 00:21:44.298 "zcopy": false, 00:21:44.298 "get_zone_info": false, 00:21:44.298 "zone_management": false, 00:21:44.298 "zone_append": false, 00:21:44.298 "compare": true, 00:21:44.298 "compare_and_write": true, 00:21:44.298 "abort": true, 00:21:44.298 "seek_hole": false, 00:21:44.298 "seek_data": false, 00:21:44.298 "copy": true, 00:21:44.298 "nvme_iov_md": false 00:21:44.298 }, 00:21:44.298 "memory_domains": [ 00:21:44.298 { 00:21:44.298 "dma_device_id": "system", 00:21:44.298 "dma_device_type": 1 
00:21:44.298 } 00:21:44.298 ], 00:21:44.298 "driver_specific": { 00:21:44.298 "nvme": [ 00:21:44.298 { 00:21:44.298 "trid": { 00:21:44.298 "trtype": "TCP", 00:21:44.298 "adrfam": "IPv4", 00:21:44.298 "traddr": "10.0.0.2", 00:21:44.298 "trsvcid": "4420", 00:21:44.298 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:44.298 }, 00:21:44.298 "ctrlr_data": { 00:21:44.298 "cntlid": 2, 00:21:44.298 "vendor_id": "0x8086", 00:21:44.298 "model_number": "SPDK bdev Controller", 00:21:44.298 "serial_number": "00000000000000000000", 00:21:44.298 "firmware_revision": "24.09", 00:21:44.298 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:44.298 "oacs": { 00:21:44.298 "security": 0, 00:21:44.298 "format": 0, 00:21:44.298 "firmware": 0, 00:21:44.298 "ns_manage": 0 00:21:44.298 }, 00:21:44.298 "multi_ctrlr": true, 00:21:44.298 "ana_reporting": false 00:21:44.298 }, 00:21:44.298 "vs": { 00:21:44.298 "nvme_version": "1.3" 00:21:44.298 }, 00:21:44.298 "ns_data": { 00:21:44.298 "id": 1, 00:21:44.298 "can_share": true 00:21:44.298 } 00:21:44.298 } 00:21:44.298 ], 00:21:44.298 "mp_policy": "active_passive" 00:21:44.298 } 00:21:44.298 } 00:21:44.298 ] 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.lMhIfv8yUy 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.lMhIfv8yUy 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:44.298 [2024-07-12 15:58:41.492609] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:44.298 [2024-07-12 15:58:41.492824] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lMhIfv8yUy 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:44.298 [2024-07-12 15:58:41.500623] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lMhIfv8yUy 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:44.298 [2024-07-12 15:58:41.508646] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:44.298 [2024-07-12 15:58:41.508715] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:44.298 nvme0n1 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.298 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:44.298 [ 00:21:44.298 { 00:21:44.298 "name": "nvme0n1", 00:21:44.298 "aliases": [ 00:21:44.298 "6c058fb4-28fa-41ae-88c4-68e7cedcc369" 00:21:44.298 ], 00:21:44.298 "product_name": "NVMe disk", 00:21:44.298 "block_size": 512, 00:21:44.298 "num_blocks": 2097152, 00:21:44.298 "uuid": "6c058fb4-28fa-41ae-88c4-68e7cedcc369", 00:21:44.298 "assigned_rate_limits": { 00:21:44.298 "rw_ios_per_sec": 0, 00:21:44.298 "rw_mbytes_per_sec": 0, 00:21:44.298 "r_mbytes_per_sec": 0, 00:21:44.556 "w_mbytes_per_sec": 0 00:21:44.556 }, 00:21:44.556 "claimed": false, 00:21:44.556 "zoned": false, 00:21:44.556 "supported_io_types": { 00:21:44.556 "read": true, 00:21:44.556 "write": true, 00:21:44.556 "unmap": false, 00:21:44.556 "flush": true, 00:21:44.556 "reset": true, 00:21:44.556 "nvme_admin": true, 00:21:44.556 "nvme_io": true, 00:21:44.556 "nvme_io_md": false, 00:21:44.556 "write_zeroes": true, 00:21:44.556 "zcopy": false, 00:21:44.556 "get_zone_info": false, 00:21:44.556 "zone_management": false, 00:21:44.556 "zone_append": false, 00:21:44.556 "compare": true, 00:21:44.556 "compare_and_write": true, 00:21:44.556 "abort": true, 00:21:44.556 "seek_hole": false, 00:21:44.556 "seek_data": false, 00:21:44.556 "copy": true, 00:21:44.556 "nvme_iov_md": false 00:21:44.556 }, 00:21:44.556 "memory_domains": [ 00:21:44.556 { 00:21:44.556 "dma_device_id": "system", 00:21:44.556 "dma_device_type": 1 00:21:44.556 } 00:21:44.556 ], 00:21:44.556 "driver_specific": { 00:21:44.556 "nvme": [ 00:21:44.556 { 00:21:44.556 "trid": { 00:21:44.556 "trtype": "TCP", 00:21:44.556 "adrfam": "IPv4", 00:21:44.556 "traddr": "10.0.0.2", 00:21:44.556 "trsvcid": "4421", 00:21:44.556 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:44.556 }, 00:21:44.556 "ctrlr_data": { 00:21:44.556 "cntlid": 3, 00:21:44.556 "vendor_id": "0x8086", 00:21:44.556 "model_number": "SPDK bdev Controller", 00:21:44.556 "serial_number": "00000000000000000000", 00:21:44.556 "firmware_revision": "24.09", 00:21:44.556 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:21:44.556 "oacs": { 00:21:44.556 "security": 0, 00:21:44.556 "format": 0, 00:21:44.556 "firmware": 0, 00:21:44.556 "ns_manage": 0 00:21:44.556 }, 00:21:44.557 "multi_ctrlr": true, 00:21:44.557 "ana_reporting": false 00:21:44.557 }, 00:21:44.557 "vs": { 00:21:44.557 "nvme_version": "1.3" 00:21:44.557 }, 00:21:44.557 "ns_data": { 00:21:44.557 "id": 1, 00:21:44.557 "can_share": true 00:21:44.557 } 00:21:44.557 } 00:21:44.557 ], 00:21:44.557 "mp_policy": "active_passive" 00:21:44.557 } 00:21:44.557 } 00:21:44.557 ] 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.lMhIfv8yUy 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:44.557 rmmod nvme_tcp 00:21:44.557 rmmod nvme_fabrics 00:21:44.557 rmmod nvme_keyring 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 809763 ']' 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 809763 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 809763 ']' 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 809763 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 809763 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 809763' 00:21:44.557 killing process with pid 809763 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 809763 00:21:44.557 [2024-07-12 15:58:41.701964] app.c:1028:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:21:44.557 [2024-07-12 15:58:41.702010] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:44.557 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 809763 00:21:44.816 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:44.816 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:44.816 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:44.816 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:44.816 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:44.816 15:58:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.816 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:44.816 15:58:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.720 15:58:43 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:46.720 00:21:46.720 real 0m5.724s 00:21:46.720 user 0m2.200s 00:21:46.720 sys 0m1.931s 00:21:46.720 15:58:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:46.720 15:58:43 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:46.720 ************************************ 00:21:46.720 END TEST nvmf_async_init 00:21:46.720 ************************************ 00:21:46.979 15:58:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:46.979 15:58:44 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:46.979 15:58:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:46.979 15:58:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.979 15:58:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.979 ************************************ 00:21:46.979 START TEST dma 00:21:46.979 ************************************ 00:21:46.979 15:58:44 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:46.979 * Looking for test storage... 
00:21:46.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:46.979 15:58:44 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.979 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.979 15:58:44 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.979 15:58:44 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.979 15:58:44 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.979 15:58:44 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.979 15:58:44 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.980 15:58:44 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.980 15:58:44 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:21:46.980 15:58:44 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.980 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:21:46.980 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.980 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.980 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.980 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.980 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.980 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.980 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.980 15:58:44 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.980 15:58:44 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:46.980 15:58:44 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:21:46.980 00:21:46.980 real 0m0.075s 00:21:46.980 user 0m0.029s 00:21:46.980 sys 0m0.051s 00:21:46.980 15:58:44 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:46.980 15:58:44 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:21:46.980 ************************************ 00:21:46.980 END TEST dma 00:21:46.980 ************************************ 00:21:46.980 15:58:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:46.980 15:58:44 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:46.980 15:58:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:46.980 15:58:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.980 15:58:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.980 ************************************ 00:21:46.980 START TEST nvmf_identify 00:21:46.980 ************************************ 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:46.980 * Looking for test storage... 
00:21:46.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:21:46.980 15:58:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:49.507 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.507 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:21:49.507 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:49.508 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:49.508 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:49.508 Found net devices under 0000:84:00.0: cvl_0_0 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:49.508 Found net devices under 0000:84:00.1: cvl_0_1 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:49.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:21:49.508 00:21:49.508 --- 10.0.0.2 ping statistics --- 00:21:49.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.508 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:49.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:21:49.508 00:21:49.508 --- 10.0.0.1 ping statistics --- 00:21:49.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.508 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=811908 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 811908 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 811908 ']' 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:49.508 15:58:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:49.508 [2024-07-12 15:58:46.501596] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:21:49.508 [2024-07-12 15:58:46.501670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.508 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.508 [2024-07-12 15:58:46.567975] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.508 [2024-07-12 15:58:46.677528] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:49.508 [2024-07-12 15:58:46.677581] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.509 [2024-07-12 15:58:46.677608] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.509 [2024-07-12 15:58:46.677619] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.509 [2024-07-12 15:58:46.677628] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.509 [2024-07-12 15:58:46.677719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.509 [2024-07-12 15:58:46.677798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.509 [2024-07-12 15:58:46.677801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.509 [2024-07-12 15:58:46.677747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:50.442 [2024-07-12 15:58:47.492650] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:50.442 Malloc0 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:50.442 [2024-07-12 15:58:47.574308] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:50.442 [ 00:21:50.442 { 00:21:50.442 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:50.442 "subtype": "Discovery", 00:21:50.442 "listen_addresses": [ 00:21:50.442 { 00:21:50.442 "trtype": "TCP", 00:21:50.442 "adrfam": "IPv4", 00:21:50.442 "traddr": "10.0.0.2", 00:21:50.442 "trsvcid": "4420" 00:21:50.442 } 00:21:50.442 ], 00:21:50.442 "allow_any_host": true, 00:21:50.442 "hosts": [] 00:21:50.442 }, 00:21:50.442 { 00:21:50.442 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.442 "subtype": "NVMe", 00:21:50.442 "listen_addresses": [ 00:21:50.442 { 00:21:50.442 "trtype": "TCP", 00:21:50.442 "adrfam": "IPv4", 00:21:50.442 "traddr": "10.0.0.2", 00:21:50.442 "trsvcid": "4420" 00:21:50.442 } 00:21:50.442 ], 00:21:50.442 "allow_any_host": true, 00:21:50.442 "hosts": [], 00:21:50.442 "serial_number": "SPDK00000000000001", 00:21:50.442 "model_number": "SPDK bdev Controller", 00:21:50.442 "max_namespaces": 32, 00:21:50.442 "min_cntlid": 1, 00:21:50.442 "max_cntlid": 65519, 00:21:50.442 "namespaces": [ 00:21:50.442 { 00:21:50.442 "nsid": 1, 00:21:50.442 "bdev_name": "Malloc0", 00:21:50.442 "name": "Malloc0", 00:21:50.442 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:50.442 "eui64": "ABCDEF0123456789", 00:21:50.442 "uuid": "e242ab4d-1e9a-4b54-92c5-dd98f0a26b41" 00:21:50.442 } 00:21:50.442 ] 00:21:50.442 } 00:21:50.442 ] 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.442 15:58:47 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:50.442 [2024-07-12 15:58:47.616850] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
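The trace above provisions the identify-test target entirely through rpc_cmd before spdk_nvme_identify connects from the host side: a TCP transport, a 64 MiB Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as namespace 1, and data plus discovery listeners on 10.0.0.2:4420. A minimal stand-alone sketch of the same RPC sequence, assuming a running nvmf_tgt and that scripts/rpc.py from the SPDK tree talks to the default /var/tmp/spdk.sock; the flags are copied from the commands visible in the trace, while the target start line and the rpc.py/binary paths are illustrative:

    # start the target first, e.g.:  build/bin/nvmf_tgt -m 0xF &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # same -o -u 8192 options the test passes
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # then query the discovery subsystem from the host side, as the test does:
    build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all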
00:21:50.442 [2024-07-12 15:58:47.616896] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid812061 ] 00:21:50.442 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.442 [2024-07-12 15:58:47.653089] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:50.442 [2024-07-12 15:58:47.653147] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:50.442 [2024-07-12 15:58:47.653157] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:50.442 [2024-07-12 15:58:47.653174] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:50.442 [2024-07-12 15:58:47.653184] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:50.442 [2024-07-12 15:58:47.653970] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:50.442 [2024-07-12 15:58:47.654046] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19266e0 0 00:21:50.442 [2024-07-12 15:58:47.659748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:50.442 [2024-07-12 15:58:47.659774] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:50.442 [2024-07-12 15:58:47.659783] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:50.442 [2024-07-12 15:58:47.659789] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:50.442 [2024-07-12 15:58:47.659832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.442 [2024-07-12 15:58:47.659844] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.442 [2024-07-12 15:58:47.659851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19266e0) 00:21:50.442 [2024-07-12 15:58:47.659868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:50.442 [2024-07-12 15:58:47.659894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986540, cid 0, qid 0 00:21:50.442 [2024-07-12 15:58:47.667752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.442 [2024-07-12 15:58:47.667769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.442 [2024-07-12 15:58:47.667776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.442 [2024-07-12 15:58:47.667784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986540) on tqpair=0x19266e0 00:21:50.442 [2024-07-12 15:58:47.667805] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:50.442 [2024-07-12 15:58:47.667817] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:50.442 [2024-07-12 15:58:47.667826] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:50.442 [2024-07-12 15:58:47.667847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.442 [2024-07-12 15:58:47.667856] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.442 [2024-07-12 15:58:47.667863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19266e0) 00:21:50.442 [2024-07-12 15:58:47.667874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.442 [2024-07-12 15:58:47.667898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986540, cid 0, qid 0 00:21:50.442 [2024-07-12 15:58:47.668086] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.442 [2024-07-12 15:58:47.668101] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.442 [2024-07-12 15:58:47.668107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.442 [2024-07-12 15:58:47.668114] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986540) on tqpair=0x19266e0 00:21:50.442 [2024-07-12 15:58:47.668123] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:50.442 [2024-07-12 15:58:47.668136] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:50.442 [2024-07-12 15:58:47.668147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.442 [2024-07-12 15:58:47.668154] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.442 [2024-07-12 15:58:47.668161] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19266e0) 00:21:50.442 [2024-07-12 15:58:47.668171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.442 [2024-07-12 15:58:47.668192] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986540, cid 0, qid 0 00:21:50.442 [2024-07-12 15:58:47.668383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.442 [2024-07-12 15:58:47.668396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.442 [2024-07-12 15:58:47.668402] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.442 [2024-07-12 15:58:47.668409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986540) on tqpair=0x19266e0 00:21:50.443 [2024-07-12 15:58:47.668417] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:50.443 [2024-07-12 15:58:47.668430] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:50.443 [2024-07-12 15:58:47.668442] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.668449] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.668455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19266e0) 00:21:50.443 [2024-07-12 15:58:47.668465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.443 [2024-07-12 15:58:47.668485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986540, cid 0, qid 0 00:21:50.443 [2024-07-12 15:58:47.668633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.443 
[2024-07-12 15:58:47.668646] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.443 [2024-07-12 15:58:47.668652] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.668658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986540) on tqpair=0x19266e0 00:21:50.443 [2024-07-12 15:58:47.668670] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:50.443 [2024-07-12 15:58:47.668687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.668696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.668702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19266e0) 00:21:50.443 [2024-07-12 15:58:47.668712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.443 [2024-07-12 15:58:47.668757] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986540, cid 0, qid 0 00:21:50.443 [2024-07-12 15:58:47.668892] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.443 [2024-07-12 15:58:47.668905] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.443 [2024-07-12 15:58:47.668912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.668918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986540) on tqpair=0x19266e0 00:21:50.443 [2024-07-12 15:58:47.668927] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:50.443 [2024-07-12 15:58:47.668935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:50.443 [2024-07-12 15:58:47.668948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:50.443 [2024-07-12 15:58:47.669058] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:50.443 [2024-07-12 15:58:47.669066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:50.443 [2024-07-12 15:58:47.669080] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.669087] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.669093] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19266e0) 00:21:50.443 [2024-07-12 15:58:47.669104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.443 [2024-07-12 15:58:47.669124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986540, cid 0, qid 0 00:21:50.443 [2024-07-12 15:58:47.669306] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.443 [2024-07-12 15:58:47.669319] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.443 [2024-07-12 15:58:47.669325] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.669332] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986540) on tqpair=0x19266e0 00:21:50.443 [2024-07-12 15:58:47.669340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:50.443 [2024-07-12 15:58:47.669356] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.669364] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.669370] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19266e0) 00:21:50.443 [2024-07-12 15:58:47.669381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.443 [2024-07-12 15:58:47.669401] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986540, cid 0, qid 0 00:21:50.443 [2024-07-12 15:58:47.669508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.443 [2024-07-12 15:58:47.669520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.443 [2024-07-12 15:58:47.669530] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.669537] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986540) on tqpair=0x19266e0 00:21:50.443 [2024-07-12 15:58:47.669545] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:50.443 [2024-07-12 15:58:47.669553] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:50.443 [2024-07-12 15:58:47.669566] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:50.443 [2024-07-12 15:58:47.669579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:50.443 [2024-07-12 15:58:47.669594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.669602] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19266e0) 00:21:50.443 [2024-07-12 15:58:47.669612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.443 [2024-07-12 15:58:47.669633] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986540, cid 0, qid 0 00:21:50.443 [2024-07-12 15:58:47.669810] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:50.443 [2024-07-12 15:58:47.669825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:50.443 [2024-07-12 15:58:47.669831] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.669838] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19266e0): datao=0, datal=4096, cccid=0 00:21:50.443 [2024-07-12 15:58:47.669846] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1986540) on tqpair(0x19266e0): expected_datao=0, payload_size=4096 00:21:50.443 [2024-07-12 15:58:47.669854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.669904] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.669913] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.670019] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.443 [2024-07-12 15:58:47.670032] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.443 [2024-07-12 15:58:47.670038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.670060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986540) on tqpair=0x19266e0 00:21:50.443 [2024-07-12 15:58:47.670071] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:50.443 [2024-07-12 15:58:47.670079] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:50.443 [2024-07-12 15:58:47.670086] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:50.443 [2024-07-12 15:58:47.670094] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:50.443 [2024-07-12 15:58:47.670102] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:50.443 [2024-07-12 15:58:47.670109] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:50.443 [2024-07-12 15:58:47.670124] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:50.443 [2024-07-12 15:58:47.670140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.670149] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.670155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19266e0) 00:21:50.443 [2024-07-12 15:58:47.670169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:50.443 [2024-07-12 15:58:47.670191] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986540, cid 0, qid 0 00:21:50.443 [2024-07-12 15:58:47.670348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.443 [2024-07-12 15:58:47.670361] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.443 [2024-07-12 15:58:47.670368] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.670374] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986540) on tqpair=0x19266e0 00:21:50.443 [2024-07-12 15:58:47.670385] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.670392] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.670398] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19266e0) 00:21:50.443 [2024-07-12 15:58:47.670408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.443 [2024-07-12 15:58:47.670418] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.670424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.670430] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19266e0) 00:21:50.443 [2024-07-12 15:58:47.670439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.443 [2024-07-12 15:58:47.670448] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.670454] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.670460] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19266e0) 00:21:50.443 [2024-07-12 15:58:47.670468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.443 [2024-07-12 15:58:47.670478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.670484] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.670490] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19266e0) 00:21:50.443 [2024-07-12 15:58:47.670498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.443 [2024-07-12 15:58:47.670507] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:50.443 [2024-07-12 15:58:47.670525] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:50.443 [2024-07-12 15:58:47.670538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.443 [2024-07-12 15:58:47.670544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19266e0) 00:21:50.444 [2024-07-12 15:58:47.670554] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.444 [2024-07-12 15:58:47.670576] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986540, cid 0, qid 0 00:21:50.444 [2024-07-12 15:58:47.670586] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19866c0, cid 1, qid 0 00:21:50.444 [2024-07-12 15:58:47.670593] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986840, cid 2, qid 0 00:21:50.444 [2024-07-12 15:58:47.670600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19869c0, cid 3, qid 0 00:21:50.444 [2024-07-12 15:58:47.670607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986b40, cid 4, qid 0 00:21:50.444 [2024-07-12 15:58:47.670840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.444 [2024-07-12 15:58:47.670855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.444 [2024-07-12 15:58:47.670865] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.670872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986b40) on tqpair=0x19266e0 00:21:50.444 [2024-07-12 15:58:47.670881] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:50.444 [2024-07-12 15:58:47.670890] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:50.444 [2024-07-12 15:58:47.670908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.670917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19266e0) 00:21:50.444 [2024-07-12 15:58:47.670927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.444 [2024-07-12 15:58:47.670948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986b40, cid 4, qid 0 00:21:50.444 [2024-07-12 15:58:47.671131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:50.444 [2024-07-12 15:58:47.671146] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:50.444 [2024-07-12 15:58:47.671152] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.671158] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19266e0): datao=0, datal=4096, cccid=4 00:21:50.444 [2024-07-12 15:58:47.671165] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1986b40) on tqpair(0x19266e0): expected_datao=0, payload_size=4096 00:21:50.444 [2024-07-12 15:58:47.671172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.671182] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.671189] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.671201] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.444 [2024-07-12 15:58:47.671210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.444 [2024-07-12 15:58:47.671216] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.671222] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986b40) on tqpair=0x19266e0 00:21:50.444 [2024-07-12 15:58:47.671240] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:50.444 [2024-07-12 15:58:47.671274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.671285] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19266e0) 00:21:50.444 [2024-07-12 15:58:47.671295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.444 [2024-07-12 15:58:47.671306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.671313] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.671319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19266e0) 00:21:50.444 [2024-07-12 15:58:47.671328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.444 [2024-07-12 15:58:47.671353] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1986b40, cid 4, qid 0 00:21:50.444 [2024-07-12 15:58:47.671364] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986cc0, cid 5, qid 0 00:21:50.444 [2024-07-12 15:58:47.671543] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:50.444 [2024-07-12 15:58:47.671556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:50.444 [2024-07-12 15:58:47.671563] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.671569] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19266e0): datao=0, datal=1024, cccid=4 00:21:50.444 [2024-07-12 15:58:47.671580] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1986b40) on tqpair(0x19266e0): expected_datao=0, payload_size=1024 00:21:50.444 [2024-07-12 15:58:47.671588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.671597] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.671604] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.671612] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.444 [2024-07-12 15:58:47.671621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.444 [2024-07-12 15:58:47.671627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.671633] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986cc0) on tqpair=0x19266e0 00:21:50.444 [2024-07-12 15:58:47.714764] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.444 [2024-07-12 15:58:47.714782] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.444 [2024-07-12 15:58:47.714789] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.714796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986b40) on tqpair=0x19266e0 00:21:50.444 [2024-07-12 15:58:47.714819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.714830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19266e0) 00:21:50.444 [2024-07-12 15:58:47.714841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.444 [2024-07-12 15:58:47.714872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986b40, cid 4, qid 0 00:21:50.444 [2024-07-12 15:58:47.715153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:50.444 [2024-07-12 15:58:47.715165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:50.444 [2024-07-12 15:58:47.715172] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.715178] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19266e0): datao=0, datal=3072, cccid=4 00:21:50.444 [2024-07-12 15:58:47.715185] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1986b40) on tqpair(0x19266e0): expected_datao=0, payload_size=3072 00:21:50.444 [2024-07-12 15:58:47.715192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.715210] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.715218] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.715314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.444 [2024-07-12 15:58:47.715327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.444 [2024-07-12 15:58:47.715334] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.715340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986b40) on tqpair=0x19266e0 00:21:50.444 [2024-07-12 15:58:47.715354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.715363] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19266e0) 00:21:50.444 [2024-07-12 15:58:47.715373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.444 [2024-07-12 15:58:47.715400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1986b40, cid 4, qid 0 00:21:50.444 [2024-07-12 15:58:47.715559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:50.444 [2024-07-12 15:58:47.715572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:50.444 [2024-07-12 15:58:47.715578] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.715584] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19266e0): datao=0, datal=8, cccid=4 00:21:50.444 [2024-07-12 15:58:47.715592] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1986b40) on tqpair(0x19266e0): expected_datao=0, payload_size=8 00:21:50.444 [2024-07-12 15:58:47.715605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.715615] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:50.444 [2024-07-12 15:58:47.715622] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:50.704 [2024-07-12 15:58:47.756895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.704 [2024-07-12 15:58:47.756914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.704 [2024-07-12 15:58:47.756921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.704 [2024-07-12 15:58:47.756928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986b40) on tqpair=0x19266e0 00:21:50.704 ===================================================== 00:21:50.704 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:50.704 ===================================================== 00:21:50.704 Controller Capabilities/Features 00:21:50.704 ================================ 00:21:50.704 Vendor ID: 0000 00:21:50.704 Subsystem Vendor ID: 0000 00:21:50.704 Serial Number: .................... 00:21:50.704 Model Number: ........................................ 
00:21:50.704 Firmware Version: 24.09 00:21:50.704 Recommended Arb Burst: 0 00:21:50.704 IEEE OUI Identifier: 00 00 00 00:21:50.704 Multi-path I/O 00:21:50.704 May have multiple subsystem ports: No 00:21:50.704 May have multiple controllers: No 00:21:50.704 Associated with SR-IOV VF: No 00:21:50.704 Max Data Transfer Size: 131072 00:21:50.704 Max Number of Namespaces: 0 00:21:50.704 Max Number of I/O Queues: 1024 00:21:50.704 NVMe Specification Version (VS): 1.3 00:21:50.704 NVMe Specification Version (Identify): 1.3 00:21:50.704 Maximum Queue Entries: 128 00:21:50.704 Contiguous Queues Required: Yes 00:21:50.704 Arbitration Mechanisms Supported 00:21:50.704 Weighted Round Robin: Not Supported 00:21:50.704 Vendor Specific: Not Supported 00:21:50.704 Reset Timeout: 15000 ms 00:21:50.704 Doorbell Stride: 4 bytes 00:21:50.704 NVM Subsystem Reset: Not Supported 00:21:50.704 Command Sets Supported 00:21:50.704 NVM Command Set: Supported 00:21:50.704 Boot Partition: Not Supported 00:21:50.704 Memory Page Size Minimum: 4096 bytes 00:21:50.704 Memory Page Size Maximum: 4096 bytes 00:21:50.704 Persistent Memory Region: Not Supported 00:21:50.704 Optional Asynchronous Events Supported 00:21:50.704 Namespace Attribute Notices: Not Supported 00:21:50.704 Firmware Activation Notices: Not Supported 00:21:50.704 ANA Change Notices: Not Supported 00:21:50.704 PLE Aggregate Log Change Notices: Not Supported 00:21:50.704 LBA Status Info Alert Notices: Not Supported 00:21:50.704 EGE Aggregate Log Change Notices: Not Supported 00:21:50.704 Normal NVM Subsystem Shutdown event: Not Supported 00:21:50.704 Zone Descriptor Change Notices: Not Supported 00:21:50.704 Discovery Log Change Notices: Supported 00:21:50.704 Controller Attributes 00:21:50.704 128-bit Host Identifier: Not Supported 00:21:50.704 Non-Operational Permissive Mode: Not Supported 00:21:50.704 NVM Sets: Not Supported 00:21:50.704 Read Recovery Levels: Not Supported 00:21:50.704 Endurance Groups: Not Supported 00:21:50.704 Predictable Latency Mode: Not Supported 00:21:50.704 Traffic Based Keep ALive: Not Supported 00:21:50.704 Namespace Granularity: Not Supported 00:21:50.704 SQ Associations: Not Supported 00:21:50.704 UUID List: Not Supported 00:21:50.704 Multi-Domain Subsystem: Not Supported 00:21:50.704 Fixed Capacity Management: Not Supported 00:21:50.704 Variable Capacity Management: Not Supported 00:21:50.704 Delete Endurance Group: Not Supported 00:21:50.704 Delete NVM Set: Not Supported 00:21:50.704 Extended LBA Formats Supported: Not Supported 00:21:50.704 Flexible Data Placement Supported: Not Supported 00:21:50.704 00:21:50.704 Controller Memory Buffer Support 00:21:50.704 ================================ 00:21:50.704 Supported: No 00:21:50.704 00:21:50.704 Persistent Memory Region Support 00:21:50.704 ================================ 00:21:50.704 Supported: No 00:21:50.704 00:21:50.704 Admin Command Set Attributes 00:21:50.704 ============================ 00:21:50.704 Security Send/Receive: Not Supported 00:21:50.704 Format NVM: Not Supported 00:21:50.704 Firmware Activate/Download: Not Supported 00:21:50.704 Namespace Management: Not Supported 00:21:50.704 Device Self-Test: Not Supported 00:21:50.704 Directives: Not Supported 00:21:50.704 NVMe-MI: Not Supported 00:21:50.704 Virtualization Management: Not Supported 00:21:50.704 Doorbell Buffer Config: Not Supported 00:21:50.704 Get LBA Status Capability: Not Supported 00:21:50.704 Command & Feature Lockdown Capability: Not Supported 00:21:50.704 Abort Command Limit: 1 00:21:50.704 Async 
Event Request Limit: 4 00:21:50.704 Number of Firmware Slots: N/A 00:21:50.704 Firmware Slot 1 Read-Only: N/A 00:21:50.704 Firmware Activation Without Reset: N/A 00:21:50.704 Multiple Update Detection Support: N/A 00:21:50.704 Firmware Update Granularity: No Information Provided 00:21:50.704 Per-Namespace SMART Log: No 00:21:50.704 Asymmetric Namespace Access Log Page: Not Supported 00:21:50.704 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:50.704 Command Effects Log Page: Not Supported 00:21:50.704 Get Log Page Extended Data: Supported 00:21:50.704 Telemetry Log Pages: Not Supported 00:21:50.704 Persistent Event Log Pages: Not Supported 00:21:50.704 Supported Log Pages Log Page: May Support 00:21:50.704 Commands Supported & Effects Log Page: Not Supported 00:21:50.704 Feature Identifiers & Effects Log Page:May Support 00:21:50.704 NVMe-MI Commands & Effects Log Page: May Support 00:21:50.704 Data Area 4 for Telemetry Log: Not Supported 00:21:50.704 Error Log Page Entries Supported: 128 00:21:50.704 Keep Alive: Not Supported 00:21:50.704 00:21:50.704 NVM Command Set Attributes 00:21:50.704 ========================== 00:21:50.704 Submission Queue Entry Size 00:21:50.704 Max: 1 00:21:50.704 Min: 1 00:21:50.704 Completion Queue Entry Size 00:21:50.704 Max: 1 00:21:50.704 Min: 1 00:21:50.704 Number of Namespaces: 0 00:21:50.704 Compare Command: Not Supported 00:21:50.704 Write Uncorrectable Command: Not Supported 00:21:50.704 Dataset Management Command: Not Supported 00:21:50.704 Write Zeroes Command: Not Supported 00:21:50.704 Set Features Save Field: Not Supported 00:21:50.704 Reservations: Not Supported 00:21:50.704 Timestamp: Not Supported 00:21:50.704 Copy: Not Supported 00:21:50.704 Volatile Write Cache: Not Present 00:21:50.704 Atomic Write Unit (Normal): 1 00:21:50.704 Atomic Write Unit (PFail): 1 00:21:50.704 Atomic Compare & Write Unit: 1 00:21:50.704 Fused Compare & Write: Supported 00:21:50.704 Scatter-Gather List 00:21:50.704 SGL Command Set: Supported 00:21:50.704 SGL Keyed: Supported 00:21:50.704 SGL Bit Bucket Descriptor: Not Supported 00:21:50.704 SGL Metadata Pointer: Not Supported 00:21:50.704 Oversized SGL: Not Supported 00:21:50.704 SGL Metadata Address: Not Supported 00:21:50.704 SGL Offset: Supported 00:21:50.704 Transport SGL Data Block: Not Supported 00:21:50.704 Replay Protected Memory Block: Not Supported 00:21:50.704 00:21:50.704 Firmware Slot Information 00:21:50.704 ========================= 00:21:50.704 Active slot: 0 00:21:50.704 00:21:50.704 00:21:50.704 Error Log 00:21:50.704 ========= 00:21:50.704 00:21:50.704 Active Namespaces 00:21:50.704 ================= 00:21:50.704 Discovery Log Page 00:21:50.704 ================== 00:21:50.704 Generation Counter: 2 00:21:50.704 Number of Records: 2 00:21:50.704 Record Format: 0 00:21:50.704 00:21:50.704 Discovery Log Entry 0 00:21:50.704 ---------------------- 00:21:50.704 Transport Type: 3 (TCP) 00:21:50.704 Address Family: 1 (IPv4) 00:21:50.704 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:50.704 Entry Flags: 00:21:50.704 Duplicate Returned Information: 1 00:21:50.704 Explicit Persistent Connection Support for Discovery: 1 00:21:50.704 Transport Requirements: 00:21:50.704 Secure Channel: Not Required 00:21:50.704 Port ID: 0 (0x0000) 00:21:50.705 Controller ID: 65535 (0xffff) 00:21:50.705 Admin Max SQ Size: 128 00:21:50.705 Transport Service Identifier: 4420 00:21:50.705 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:50.705 Transport Address: 10.0.0.2 00:21:50.705 
Discovery Log Entry 1 00:21:50.705 ---------------------- 00:21:50.705 Transport Type: 3 (TCP) 00:21:50.705 Address Family: 1 (IPv4) 00:21:50.705 Subsystem Type: 2 (NVM Subsystem) 00:21:50.705 Entry Flags: 00:21:50.705 Duplicate Returned Information: 0 00:21:50.705 Explicit Persistent Connection Support for Discovery: 0 00:21:50.705 Transport Requirements: 00:21:50.705 Secure Channel: Not Required 00:21:50.705 Port ID: 0 (0x0000) 00:21:50.705 Controller ID: 65535 (0xffff) 00:21:50.705 Admin Max SQ Size: 128 00:21:50.705 Transport Service Identifier: 4420 00:21:50.705 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:50.705 Transport Address: 10.0.0.2 [2024-07-12 15:58:47.757048] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:50.705 [2024-07-12 15:58:47.757070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986540) on tqpair=0x19266e0 00:21:50.705 [2024-07-12 15:58:47.757081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.705 [2024-07-12 15:58:47.757090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19866c0) on tqpair=0x19266e0 00:21:50.705 [2024-07-12 15:58:47.757097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.705 [2024-07-12 15:58:47.757105] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1986840) on tqpair=0x19266e0 00:21:50.705 [2024-07-12 15:58:47.757112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.705 [2024-07-12 15:58:47.757120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19869c0) on tqpair=0x19266e0 00:21:50.705 [2024-07-12 15:58:47.757128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.705 [2024-07-12 15:58:47.757141] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.757148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.757155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19266e0) 00:21:50.705 [2024-07-12 15:58:47.757166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.705 [2024-07-12 15:58:47.757190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19869c0, cid 3, qid 0 00:21:50.705 [2024-07-12 15:58:47.757373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.705 [2024-07-12 15:58:47.757387] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.705 [2024-07-12 15:58:47.757393] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.757399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19869c0) on tqpair=0x19266e0 00:21:50.705 [2024-07-12 15:58:47.757410] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.757418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.757424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19266e0) 00:21:50.705 [2024-07-12 
15:58:47.757434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.705 [2024-07-12 15:58:47.757460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19869c0, cid 3, qid 0 00:21:50.705 [2024-07-12 15:58:47.757651] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.705 [2024-07-12 15:58:47.757663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.705 [2024-07-12 15:58:47.757669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.757675] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19869c0) on tqpair=0x19266e0 00:21:50.705 [2024-07-12 15:58:47.757683] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:50.705 [2024-07-12 15:58:47.757695] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:50.705 [2024-07-12 15:58:47.757712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.757735] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.761756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19266e0) 00:21:50.705 [2024-07-12 15:58:47.761769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.705 [2024-07-12 15:58:47.761793] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19869c0, cid 3, qid 0 00:21:50.705 [2024-07-12 15:58:47.761946] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.705 [2024-07-12 15:58:47.761960] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.705 [2024-07-12 15:58:47.761967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.761973] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19869c0) on tqpair=0x19266e0 00:21:50.705 [2024-07-12 15:58:47.761987] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:21:50.705 00:21:50.705 15:58:47 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:50.705 [2024-07-12 15:58:47.798144] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
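The host/identify.sh step above launches spdk_nvme_identify against the NVM subsystem advertised in Discovery Log Entry 1 (nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420), and the debug trace that follows is the driver's controller-initialization state machine for that subsystem. As a rough orientation only, the sketch below shows how the "-r" transport-ID string maps onto SPDK's public NVMe API; it is not the autotest code, and it assumes the v24.09-era interfaces (spdk_nvme_transport_id_parse(), spdk_nvme_connect(), spdk_nvme_ctrlr_get_data(), spdk_nvme_detach()) behave as in current headers.

/*
 * Sketch (assumed API, not part of the test): parse the same transport-ID
 * string that identify.sh passes via -r, connect, and read basic controller
 * data. spdk_nvme_connect() internally performs the steps traced in this log:
 * FABRIC CONNECT, read VS/CAP, set CC.EN = 1, wait for CSTS.RDY = 1, IDENTIFY,
 * AER configuration and keep-alive setup.
 */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";
    if (spdk_env_init(&env_opts) != 0) {
        return 1;
    }

    memset(&trid, 0, sizeof(trid));
    /* Same key:value syntax as the -r argument logged above. */
    if (spdk_nvme_transport_id_parse(&trid,
        "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        fprintf(stderr, "failed to parse transport ID\n");
        return 1;
    }

    /* Blocks until the controller reaches the "ready" state seen in the trace. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect to %s failed\n", trid.traddr);
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("CNTLID 0x%04x, subnqn %s\n", cdata->cntlid, (const char *)cdata->subnqn);

    spdk_nvme_detach(ctrlr);
    return 0;
}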
00:21:50.705 [2024-07-12 15:58:47.798198] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid812063 ] 00:21:50.705 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.705 [2024-07-12 15:58:47.832160] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:50.705 [2024-07-12 15:58:47.832212] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:50.705 [2024-07-12 15:58:47.832222] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:50.705 [2024-07-12 15:58:47.832237] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:50.705 [2024-07-12 15:58:47.832246] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:50.705 [2024-07-12 15:58:47.832697] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:50.705 [2024-07-12 15:58:47.832759] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x14c06e0 0 00:21:50.705 [2024-07-12 15:58:47.838745] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:50.705 [2024-07-12 15:58:47.838784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:50.705 [2024-07-12 15:58:47.838793] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:50.705 [2024-07-12 15:58:47.838799] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:50.705 [2024-07-12 15:58:47.838832] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.838843] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.838850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14c06e0) 00:21:50.705 [2024-07-12 15:58:47.838864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:50.705 [2024-07-12 15:58:47.838895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520540, cid 0, qid 0 00:21:50.705 [2024-07-12 15:58:47.846759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.705 [2024-07-12 15:58:47.846776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.705 [2024-07-12 15:58:47.846784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.846791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520540) on tqpair=0x14c06e0 00:21:50.705 [2024-07-12 15:58:47.846809] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:50.705 [2024-07-12 15:58:47.846821] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:50.705 [2024-07-12 15:58:47.846830] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:50.705 [2024-07-12 15:58:47.846848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.846857] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
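From this point the trace walks the same initialization sequence for nqn.2016-06.io.spdk:cnode1 that was shown for the discovery subsystem, and during the IDENTIFY active-namespace phase it additionally reports "Namespace 1 was added". As a hedged illustration of what a host application does with that result once the connect has returned, the fragment below enumerates the active namespaces through SPDK's public accessors; the names (spdk_nvme_ctrlr_get_first_active_ns() and related calls) are assumed to match the headers of this SPDK revision.

/*
 * Sketch (assumed API): walk the namespaces that the IDENTIFY phase in the
 * trace below reports as active. `ctrlr` is a controller already attached,
 * e.g. by the spdk_nvme_connect() sketch shown earlier.
 */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
    uint32_t nsid;

    /* Effective per-command transfer limit; in this trace it should roughly
     * correspond to the smaller of the transport limit and the MDTS-derived
     * value (131072) printed by nvme_ctrlr_identify_done. */
    printf("max transfer size: %u bytes\n",
           spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        if (ns == NULL || !spdk_nvme_ns_is_active(ns)) {
            continue;
        }
        printf("nsid %u: %" PRIu64 " bytes, %u-byte sectors\n", nsid,
               spdk_nvme_ns_get_size(ns), spdk_nvme_ns_get_sector_size(ns));
    }
}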
00:21:50.705 [2024-07-12 15:58:47.846864] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14c06e0) 00:21:50.705 [2024-07-12 15:58:47.846876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.705 [2024-07-12 15:58:47.846900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520540, cid 0, qid 0 00:21:50.705 [2024-07-12 15:58:47.847048] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.705 [2024-07-12 15:58:47.847060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.705 [2024-07-12 15:58:47.847067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.847074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520540) on tqpair=0x14c06e0 00:21:50.705 [2024-07-12 15:58:47.847082] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:50.705 [2024-07-12 15:58:47.847096] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:50.705 [2024-07-12 15:58:47.847108] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.847115] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.847121] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14c06e0) 00:21:50.705 [2024-07-12 15:58:47.847132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.705 [2024-07-12 15:58:47.847153] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520540, cid 0, qid 0 00:21:50.705 [2024-07-12 15:58:47.847244] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.705 [2024-07-12 15:58:47.847258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.705 [2024-07-12 15:58:47.847265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.847271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520540) on tqpair=0x14c06e0 00:21:50.705 [2024-07-12 15:58:47.847279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:50.705 [2024-07-12 15:58:47.847293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:50.705 [2024-07-12 15:58:47.847305] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.847312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.705 [2024-07-12 15:58:47.847318] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14c06e0) 00:21:50.706 [2024-07-12 15:58:47.847329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.706 [2024-07-12 15:58:47.847350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520540, cid 0, qid 0 00:21:50.706 [2024-07-12 15:58:47.847433] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.706 [2024-07-12 15:58:47.847445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:21:50.706 [2024-07-12 15:58:47.847452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.847459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520540) on tqpair=0x14c06e0 00:21:50.706 [2024-07-12 15:58:47.847467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:50.706 [2024-07-12 15:58:47.847484] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.847493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.847499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14c06e0) 00:21:50.706 [2024-07-12 15:58:47.847510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.706 [2024-07-12 15:58:47.847531] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520540, cid 0, qid 0 00:21:50.706 [2024-07-12 15:58:47.847609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.706 [2024-07-12 15:58:47.847621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.706 [2024-07-12 15:58:47.847628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.847635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520540) on tqpair=0x14c06e0 00:21:50.706 [2024-07-12 15:58:47.847642] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:50.706 [2024-07-12 15:58:47.847650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:50.706 [2024-07-12 15:58:47.847663] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:50.706 [2024-07-12 15:58:47.847773] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:50.706 [2024-07-12 15:58:47.847783] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:50.706 [2024-07-12 15:58:47.847795] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.847803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.847810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14c06e0) 00:21:50.706 [2024-07-12 15:58:47.847820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.706 [2024-07-12 15:58:47.847842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520540, cid 0, qid 0 00:21:50.706 [2024-07-12 15:58:47.847974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.706 [2024-07-12 15:58:47.847988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.706 [2024-07-12 15:58:47.847995] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848002] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520540) on 
tqpair=0x14c06e0 00:21:50.706 [2024-07-12 15:58:47.848010] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:50.706 [2024-07-12 15:58:47.848028] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848037] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14c06e0) 00:21:50.706 [2024-07-12 15:58:47.848070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.706 [2024-07-12 15:58:47.848091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520540, cid 0, qid 0 00:21:50.706 [2024-07-12 15:58:47.848174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.706 [2024-07-12 15:58:47.848187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.706 [2024-07-12 15:58:47.848194] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520540) on tqpair=0x14c06e0 00:21:50.706 [2024-07-12 15:58:47.848208] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:50.706 [2024-07-12 15:58:47.848216] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:50.706 [2024-07-12 15:58:47.848230] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:50.706 [2024-07-12 15:58:47.848246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:50.706 [2024-07-12 15:58:47.848260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848268] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14c06e0) 00:21:50.706 [2024-07-12 15:58:47.848279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.706 [2024-07-12 15:58:47.848300] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520540, cid 0, qid 0 00:21:50.706 [2024-07-12 15:58:47.848439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:50.706 [2024-07-12 15:58:47.848453] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:50.706 [2024-07-12 15:58:47.848460] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848467] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14c06e0): datao=0, datal=4096, cccid=0 00:21:50.706 [2024-07-12 15:58:47.848474] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1520540) on tqpair(0x14c06e0): expected_datao=0, payload_size=4096 00:21:50.706 [2024-07-12 15:58:47.848482] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848492] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848500] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848522] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.706 [2024-07-12 15:58:47.848534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.706 [2024-07-12 15:58:47.848541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520540) on tqpair=0x14c06e0 00:21:50.706 [2024-07-12 15:58:47.848558] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:50.706 [2024-07-12 15:58:47.848567] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:50.706 [2024-07-12 15:58:47.848574] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:50.706 [2024-07-12 15:58:47.848581] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:50.706 [2024-07-12 15:58:47.848588] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:50.706 [2024-07-12 15:58:47.848596] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:50.706 [2024-07-12 15:58:47.848610] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:50.706 [2024-07-12 15:58:47.848626] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848635] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14c06e0) 00:21:50.706 [2024-07-12 15:58:47.848656] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:50.706 [2024-07-12 15:58:47.848677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520540, cid 0, qid 0 00:21:50.706 [2024-07-12 15:58:47.848857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.706 [2024-07-12 15:58:47.848871] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.706 [2024-07-12 15:58:47.848878] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848885] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520540) on tqpair=0x14c06e0 00:21:50.706 [2024-07-12 15:58:47.848896] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848904] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x14c06e0) 00:21:50.706 [2024-07-12 15:58:47.848920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.706 [2024-07-12 15:58:47.848930] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848944] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x14c06e0) 00:21:50.706 [2024-07-12 15:58:47.848953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.706 [2024-07-12 15:58:47.848962] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848969] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.848976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x14c06e0) 00:21:50.706 [2024-07-12 15:58:47.848984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.706 [2024-07-12 15:58:47.848994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.849001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.849007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14c06e0) 00:21:50.706 [2024-07-12 15:58:47.849016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.706 [2024-07-12 15:58:47.849040] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:50.706 [2024-07-12 15:58:47.849059] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:50.706 [2024-07-12 15:58:47.849072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.849079] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14c06e0) 00:21:50.706 [2024-07-12 15:58:47.849089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.706 [2024-07-12 15:58:47.849112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520540, cid 0, qid 0 00:21:50.706 [2024-07-12 15:58:47.849123] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15206c0, cid 1, qid 0 00:21:50.706 [2024-07-12 15:58:47.849130] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520840, cid 2, qid 0 00:21:50.706 [2024-07-12 15:58:47.849138] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15209c0, cid 3, qid 0 00:21:50.706 [2024-07-12 15:58:47.849145] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520b40, cid 4, qid 0 00:21:50.706 [2024-07-12 15:58:47.849330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.706 [2024-07-12 15:58:47.849344] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.706 [2024-07-12 15:58:47.849351] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.706 [2024-07-12 15:58:47.849358] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520b40) on tqpair=0x14c06e0 00:21:50.706 [2024-07-12 15:58:47.849366] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:50.707 [2024-07-12 15:58:47.849374] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.849392] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.849403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.849413] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.849420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.849427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14c06e0) 00:21:50.707 [2024-07-12 15:58:47.849437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:50.707 [2024-07-12 15:58:47.849458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520b40, cid 4, qid 0 00:21:50.707 [2024-07-12 15:58:47.849588] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.707 [2024-07-12 15:58:47.849599] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.707 [2024-07-12 15:58:47.849606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.849613] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520b40) on tqpair=0x14c06e0 00:21:50.707 [2024-07-12 15:58:47.849677] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.849696] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.849710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.849733] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14c06e0) 00:21:50.707 [2024-07-12 15:58:47.849754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.707 [2024-07-12 15:58:47.849777] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520b40, cid 4, qid 0 00:21:50.707 [2024-07-12 15:58:47.849920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:50.707 [2024-07-12 15:58:47.849934] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:50.707 [2024-07-12 15:58:47.849941] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.849948] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14c06e0): datao=0, datal=4096, cccid=4 00:21:50.707 [2024-07-12 15:58:47.849956] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1520b40) on tqpair(0x14c06e0): expected_datao=0, payload_size=4096 00:21:50.707 [2024-07-12 15:58:47.849963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.849980] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.849989] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.850066] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:21:50.707 [2024-07-12 15:58:47.850079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.707 [2024-07-12 15:58:47.850086] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.850096] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520b40) on tqpair=0x14c06e0 00:21:50.707 [2024-07-12 15:58:47.850111] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:50.707 [2024-07-12 15:58:47.850131] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.850148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.850161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.850169] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14c06e0) 00:21:50.707 [2024-07-12 15:58:47.850179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.707 [2024-07-12 15:58:47.850201] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520b40, cid 4, qid 0 00:21:50.707 [2024-07-12 15:58:47.850348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:50.707 [2024-07-12 15:58:47.850362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:50.707 [2024-07-12 15:58:47.850368] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.850375] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14c06e0): datao=0, datal=4096, cccid=4 00:21:50.707 [2024-07-12 15:58:47.850382] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1520b40) on tqpair(0x14c06e0): expected_datao=0, payload_size=4096 00:21:50.707 [2024-07-12 15:58:47.850390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.850406] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.850415] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.850438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.707 [2024-07-12 15:58:47.850448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.707 [2024-07-12 15:58:47.850455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.850462] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520b40) on tqpair=0x14c06e0 00:21:50.707 [2024-07-12 15:58:47.850481] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.850499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.850513] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.850521] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14c06e0) 00:21:50.707 [2024-07-12 15:58:47.850531] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.707 [2024-07-12 15:58:47.850553] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520b40, cid 4, qid 0 00:21:50.707 [2024-07-12 15:58:47.850646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:50.707 [2024-07-12 15:58:47.850659] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:50.707 [2024-07-12 15:58:47.850666] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.850672] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14c06e0): datao=0, datal=4096, cccid=4 00:21:50.707 [2024-07-12 15:58:47.850679] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1520b40) on tqpair(0x14c06e0): expected_datao=0, payload_size=4096 00:21:50.707 [2024-07-12 15:58:47.850687] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.850703] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.850730] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.854754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.707 [2024-07-12 15:58:47.854769] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.707 [2024-07-12 15:58:47.854776] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.854784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520b40) on tqpair=0x14c06e0 00:21:50.707 [2024-07-12 15:58:47.854797] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.854813] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.854828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.854839] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.854848] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.854856] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.854864] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:50.707 [2024-07-12 15:58:47.854872] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:50.707 [2024-07-12 15:58:47.854881] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:50.707 [2024-07-12 15:58:47.854898] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.854907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x14c06e0) 00:21:50.707 [2024-07-12 15:58:47.854918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.707 [2024-07-12 15:58:47.854929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.854937] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.707 [2024-07-12 15:58:47.854943] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14c06e0) 00:21:50.707 [2024-07-12 15:58:47.854952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.707 [2024-07-12 15:58:47.854979] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520b40, cid 4, qid 0 00:21:50.708 [2024-07-12 15:58:47.854991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520cc0, cid 5, qid 0 00:21:50.708 [2024-07-12 15:58:47.855126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.708 [2024-07-12 15:58:47.855141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.708 [2024-07-12 15:58:47.855147] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.855154] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520b40) on tqpair=0x14c06e0 00:21:50.708 [2024-07-12 15:58:47.855165] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.708 [2024-07-12 15:58:47.855174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.708 [2024-07-12 15:58:47.855180] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.855187] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520cc0) on tqpair=0x14c06e0 00:21:50.708 [2024-07-12 15:58:47.855203] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.855213] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14c06e0) 00:21:50.708 [2024-07-12 15:58:47.855227] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.708 [2024-07-12 15:58:47.855249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520cc0, cid 5, qid 0 00:21:50.708 [2024-07-12 15:58:47.855386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.708 [2024-07-12 15:58:47.855397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.708 [2024-07-12 15:58:47.855404] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.855411] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520cc0) on tqpair=0x14c06e0 00:21:50.708 [2024-07-12 15:58:47.855426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.855435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14c06e0) 00:21:50.708 [2024-07-12 15:58:47.855446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.708 [2024-07-12 15:58:47.855467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520cc0, cid 5, qid 0 00:21:50.708 [2024-07-12 15:58:47.855596] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.708 [2024-07-12 15:58:47.855608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.708 [2024-07-12 15:58:47.855615] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.855622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520cc0) on tqpair=0x14c06e0 00:21:50.708 [2024-07-12 15:58:47.855637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.855646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14c06e0) 00:21:50.708 [2024-07-12 15:58:47.855656] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.708 [2024-07-12 15:58:47.855676] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520cc0, cid 5, qid 0 00:21:50.708 [2024-07-12 15:58:47.855806] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.708 [2024-07-12 15:58:47.855820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.708 [2024-07-12 15:58:47.855827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.855834] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520cc0) on tqpair=0x14c06e0 00:21:50.708 [2024-07-12 15:58:47.855858] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.855870] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x14c06e0) 00:21:50.708 [2024-07-12 15:58:47.855881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.708 [2024-07-12 15:58:47.855894] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.855902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x14c06e0) 00:21:50.708 [2024-07-12 15:58:47.855911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.708 [2024-07-12 15:58:47.855923] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.855931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x14c06e0) 00:21:50.708 [2024-07-12 15:58:47.855940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.708 [2024-07-12 15:58:47.855952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.855960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x14c06e0) 00:21:50.708 [2024-07-12 15:58:47.855969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.708 [2024-07-12 15:58:47.855996] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520cc0, cid 5, qid 0 00:21:50.708 [2024-07-12 15:58:47.856008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520b40, cid 4, qid 0 
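The debug trace above covers the admin-queue bring-up of nqn.2016-06.io.spdk:cnode1: keep-alive configuration, the IDENTIFY chain (controller, active namespace list, namespace, NS ID descriptors) and the GET LOG PAGE fan-out, all carried as TCP capsule commands on tqpair 0x14c06e0. The same sequence can be driven by hand against the listener this test exposes (10.0.0.2:4420). A minimal sketch, assuming a default SPDK build tree (the example binary may be named identify or spdk_nvme_identify depending on the SPDK version) and that the target subsystem from this run is still up:

# Hypothetical manual reproduction of the identify sequence traced above.
TRID='trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
# Connects an admin queue pair and walks the same IDENTIFY / GET LOG PAGE path,
# printing the controller data block that appears further down in this log.
./build/examples/identify -r "$TRID"
# Rough kernel-initiator equivalent via nvme-cli, if the nvme-tcp module is loaded:
#   nvme discover -t tcp -a 10.0.0.2 -s 4420
#   nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1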
00:21:50.708 [2024-07-12 15:58:47.856029] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520e40, cid 6, qid 0 00:21:50.708 [2024-07-12 15:58:47.856037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520fc0, cid 7, qid 0 00:21:50.708 [2024-07-12 15:58:47.856223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:50.708 [2024-07-12 15:58:47.856237] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:50.708 [2024-07-12 15:58:47.856244] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856250] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14c06e0): datao=0, datal=8192, cccid=5 00:21:50.708 [2024-07-12 15:58:47.856258] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1520cc0) on tqpair(0x14c06e0): expected_datao=0, payload_size=8192 00:21:50.708 [2024-07-12 15:58:47.856265] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856287] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856297] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:50.708 [2024-07-12 15:58:47.856314] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:50.708 [2024-07-12 15:58:47.856320] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856327] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14c06e0): datao=0, datal=512, cccid=4 00:21:50.708 [2024-07-12 15:58:47.856334] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1520b40) on tqpair(0x14c06e0): expected_datao=0, payload_size=512 00:21:50.708 [2024-07-12 15:58:47.856341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856350] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856357] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:50.708 [2024-07-12 15:58:47.856374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:50.708 [2024-07-12 15:58:47.856380] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856386] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14c06e0): datao=0, datal=512, cccid=6 00:21:50.708 [2024-07-12 15:58:47.856394] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1520e40) on tqpair(0x14c06e0): expected_datao=0, payload_size=512 00:21:50.708 [2024-07-12 15:58:47.856401] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856410] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856417] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:50.708 [2024-07-12 15:58:47.856434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:50.708 [2024-07-12 15:58:47.856440] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856446] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x14c06e0): datao=0, datal=4096, cccid=7 00:21:50.708 [2024-07-12 15:58:47.856453] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1520fc0) on tqpair(0x14c06e0): expected_datao=0, payload_size=4096 00:21:50.708 [2024-07-12 15:58:47.856461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856470] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856477] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:50.708 [2024-07-12 15:58:47.856492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.708 [2024-07-12 15:58:47.856502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.709 [2024-07-12 15:58:47.856508] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.709 [2024-07-12 15:58:47.856515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520cc0) on tqpair=0x14c06e0 00:21:50.709 [2024-07-12 15:58:47.856533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.709 [2024-07-12 15:58:47.856543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.709 [2024-07-12 15:58:47.856550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.709 [2024-07-12 15:58:47.856557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520b40) on tqpair=0x14c06e0 00:21:50.709 [2024-07-12 15:58:47.856572] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.709 [2024-07-12 15:58:47.856582] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.709 [2024-07-12 15:58:47.856588] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.709 [2024-07-12 15:58:47.856595] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520e40) on tqpair=0x14c06e0 00:21:50.709 [2024-07-12 15:58:47.856606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.709 [2024-07-12 15:58:47.856615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.709 [2024-07-12 15:58:47.856621] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.709 [2024-07-12 15:58:47.856628] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520fc0) on tqpair=0x14c06e0 00:21:50.709 ===================================================== 00:21:50.709 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:50.709 ===================================================== 00:21:50.709 Controller Capabilities/Features 00:21:50.709 ================================ 00:21:50.709 Vendor ID: 8086 00:21:50.709 Subsystem Vendor ID: 8086 00:21:50.709 Serial Number: SPDK00000000000001 00:21:50.709 Model Number: SPDK bdev Controller 00:21:50.709 Firmware Version: 24.09 00:21:50.709 Recommended Arb Burst: 6 00:21:50.709 IEEE OUI Identifier: e4 d2 5c 00:21:50.709 Multi-path I/O 00:21:50.709 May have multiple subsystem ports: Yes 00:21:50.709 May have multiple controllers: Yes 00:21:50.709 Associated with SR-IOV VF: No 00:21:50.709 Max Data Transfer Size: 131072 00:21:50.709 Max Number of Namespaces: 32 00:21:50.709 Max Number of I/O Queues: 127 00:21:50.709 NVMe Specification Version (VS): 1.3 00:21:50.709 NVMe Specification Version (Identify): 1.3 00:21:50.709 Maximum Queue Entries: 128 00:21:50.709 Contiguous Queues Required: Yes 00:21:50.709 
Arbitration Mechanisms Supported 00:21:50.709 Weighted Round Robin: Not Supported 00:21:50.709 Vendor Specific: Not Supported 00:21:50.709 Reset Timeout: 15000 ms 00:21:50.709 Doorbell Stride: 4 bytes 00:21:50.709 NVM Subsystem Reset: Not Supported 00:21:50.709 Command Sets Supported 00:21:50.709 NVM Command Set: Supported 00:21:50.709 Boot Partition: Not Supported 00:21:50.709 Memory Page Size Minimum: 4096 bytes 00:21:50.709 Memory Page Size Maximum: 4096 bytes 00:21:50.709 Persistent Memory Region: Not Supported 00:21:50.709 Optional Asynchronous Events Supported 00:21:50.709 Namespace Attribute Notices: Supported 00:21:50.709 Firmware Activation Notices: Not Supported 00:21:50.709 ANA Change Notices: Not Supported 00:21:50.709 PLE Aggregate Log Change Notices: Not Supported 00:21:50.709 LBA Status Info Alert Notices: Not Supported 00:21:50.709 EGE Aggregate Log Change Notices: Not Supported 00:21:50.709 Normal NVM Subsystem Shutdown event: Not Supported 00:21:50.709 Zone Descriptor Change Notices: Not Supported 00:21:50.709 Discovery Log Change Notices: Not Supported 00:21:50.709 Controller Attributes 00:21:50.709 128-bit Host Identifier: Supported 00:21:50.709 Non-Operational Permissive Mode: Not Supported 00:21:50.709 NVM Sets: Not Supported 00:21:50.709 Read Recovery Levels: Not Supported 00:21:50.709 Endurance Groups: Not Supported 00:21:50.709 Predictable Latency Mode: Not Supported 00:21:50.709 Traffic Based Keep ALive: Not Supported 00:21:50.709 Namespace Granularity: Not Supported 00:21:50.709 SQ Associations: Not Supported 00:21:50.709 UUID List: Not Supported 00:21:50.709 Multi-Domain Subsystem: Not Supported 00:21:50.709 Fixed Capacity Management: Not Supported 00:21:50.709 Variable Capacity Management: Not Supported 00:21:50.709 Delete Endurance Group: Not Supported 00:21:50.709 Delete NVM Set: Not Supported 00:21:50.709 Extended LBA Formats Supported: Not Supported 00:21:50.709 Flexible Data Placement Supported: Not Supported 00:21:50.709 00:21:50.709 Controller Memory Buffer Support 00:21:50.709 ================================ 00:21:50.709 Supported: No 00:21:50.709 00:21:50.709 Persistent Memory Region Support 00:21:50.709 ================================ 00:21:50.709 Supported: No 00:21:50.709 00:21:50.709 Admin Command Set Attributes 00:21:50.709 ============================ 00:21:50.709 Security Send/Receive: Not Supported 00:21:50.709 Format NVM: Not Supported 00:21:50.709 Firmware Activate/Download: Not Supported 00:21:50.709 Namespace Management: Not Supported 00:21:50.709 Device Self-Test: Not Supported 00:21:50.709 Directives: Not Supported 00:21:50.709 NVMe-MI: Not Supported 00:21:50.709 Virtualization Management: Not Supported 00:21:50.709 Doorbell Buffer Config: Not Supported 00:21:50.709 Get LBA Status Capability: Not Supported 00:21:50.709 Command & Feature Lockdown Capability: Not Supported 00:21:50.709 Abort Command Limit: 4 00:21:50.709 Async Event Request Limit: 4 00:21:50.709 Number of Firmware Slots: N/A 00:21:50.709 Firmware Slot 1 Read-Only: N/A 00:21:50.709 Firmware Activation Without Reset: N/A 00:21:50.709 Multiple Update Detection Support: N/A 00:21:50.709 Firmware Update Granularity: No Information Provided 00:21:50.709 Per-Namespace SMART Log: No 00:21:50.709 Asymmetric Namespace Access Log Page: Not Supported 00:21:50.709 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:50.709 Command Effects Log Page: Supported 00:21:50.709 Get Log Page Extended Data: Supported 00:21:50.709 Telemetry Log Pages: Not Supported 00:21:50.709 Persistent Event Log 
Pages: Not Supported 00:21:50.709 Supported Log Pages Log Page: May Support 00:21:50.709 Commands Supported & Effects Log Page: Not Supported 00:21:50.709 Feature Identifiers & Effects Log Page:May Support 00:21:50.709 NVMe-MI Commands & Effects Log Page: May Support 00:21:50.709 Data Area 4 for Telemetry Log: Not Supported 00:21:50.709 Error Log Page Entries Supported: 128 00:21:50.709 Keep Alive: Supported 00:21:50.709 Keep Alive Granularity: 10000 ms 00:21:50.709 00:21:50.709 NVM Command Set Attributes 00:21:50.709 ========================== 00:21:50.709 Submission Queue Entry Size 00:21:50.709 Max: 64 00:21:50.709 Min: 64 00:21:50.709 Completion Queue Entry Size 00:21:50.709 Max: 16 00:21:50.709 Min: 16 00:21:50.709 Number of Namespaces: 32 00:21:50.709 Compare Command: Supported 00:21:50.709 Write Uncorrectable Command: Not Supported 00:21:50.709 Dataset Management Command: Supported 00:21:50.709 Write Zeroes Command: Supported 00:21:50.709 Set Features Save Field: Not Supported 00:21:50.709 Reservations: Supported 00:21:50.709 Timestamp: Not Supported 00:21:50.709 Copy: Supported 00:21:50.709 Volatile Write Cache: Present 00:21:50.709 Atomic Write Unit (Normal): 1 00:21:50.709 Atomic Write Unit (PFail): 1 00:21:50.709 Atomic Compare & Write Unit: 1 00:21:50.709 Fused Compare & Write: Supported 00:21:50.709 Scatter-Gather List 00:21:50.709 SGL Command Set: Supported 00:21:50.709 SGL Keyed: Supported 00:21:50.709 SGL Bit Bucket Descriptor: Not Supported 00:21:50.709 SGL Metadata Pointer: Not Supported 00:21:50.709 Oversized SGL: Not Supported 00:21:50.709 SGL Metadata Address: Not Supported 00:21:50.709 SGL Offset: Supported 00:21:50.709 Transport SGL Data Block: Not Supported 00:21:50.709 Replay Protected Memory Block: Not Supported 00:21:50.709 00:21:50.709 Firmware Slot Information 00:21:50.709 ========================= 00:21:50.709 Active slot: 1 00:21:50.709 Slot 1 Firmware Revision: 24.09 00:21:50.709 00:21:50.709 00:21:50.709 Commands Supported and Effects 00:21:50.709 ============================== 00:21:50.709 Admin Commands 00:21:50.709 -------------- 00:21:50.709 Get Log Page (02h): Supported 00:21:50.709 Identify (06h): Supported 00:21:50.709 Abort (08h): Supported 00:21:50.710 Set Features (09h): Supported 00:21:50.710 Get Features (0Ah): Supported 00:21:50.710 Asynchronous Event Request (0Ch): Supported 00:21:50.710 Keep Alive (18h): Supported 00:21:50.710 I/O Commands 00:21:50.710 ------------ 00:21:50.710 Flush (00h): Supported LBA-Change 00:21:50.710 Write (01h): Supported LBA-Change 00:21:50.710 Read (02h): Supported 00:21:50.710 Compare (05h): Supported 00:21:50.710 Write Zeroes (08h): Supported LBA-Change 00:21:50.710 Dataset Management (09h): Supported LBA-Change 00:21:50.710 Copy (19h): Supported LBA-Change 00:21:50.710 00:21:50.710 Error Log 00:21:50.710 ========= 00:21:50.710 00:21:50.710 Arbitration 00:21:50.710 =========== 00:21:50.710 Arbitration Burst: 1 00:21:50.710 00:21:50.710 Power Management 00:21:50.710 ================ 00:21:50.710 Number of Power States: 1 00:21:50.710 Current Power State: Power State #0 00:21:50.710 Power State #0: 00:21:50.710 Max Power: 0.00 W 00:21:50.710 Non-Operational State: Operational 00:21:50.710 Entry Latency: Not Reported 00:21:50.710 Exit Latency: Not Reported 00:21:50.710 Relative Read Throughput: 0 00:21:50.710 Relative Read Latency: 0 00:21:50.710 Relative Write Throughput: 0 00:21:50.710 Relative Write Latency: 0 00:21:50.710 Idle Power: Not Reported 00:21:50.710 Active Power: Not Reported 00:21:50.710 
Non-Operational Permissive Mode: Not Supported 00:21:50.710 00:21:50.710 Health Information 00:21:50.710 ================== 00:21:50.710 Critical Warnings: 00:21:50.710 Available Spare Space: OK 00:21:50.710 Temperature: OK 00:21:50.710 Device Reliability: OK 00:21:50.710 Read Only: No 00:21:50.710 Volatile Memory Backup: OK 00:21:50.710 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:50.710 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:50.710 Available Spare: 0% 00:21:50.710 Available Spare Threshold: 0% 00:21:50.710 Life Percentage Used:[2024-07-12 15:58:47.856734] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.856768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x14c06e0) 00:21:50.710 [2024-07-12 15:58:47.856780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.710 [2024-07-12 15:58:47.856803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1520fc0, cid 7, qid 0 00:21:50.710 [2024-07-12 15:58:47.856938] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.710 [2024-07-12 15:58:47.856952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.710 [2024-07-12 15:58:47.856959] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.856966] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520fc0) on tqpair=0x14c06e0 00:21:50.710 [2024-07-12 15:58:47.857008] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:50.710 [2024-07-12 15:58:47.857027] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520540) on tqpair=0x14c06e0 00:21:50.710 [2024-07-12 15:58:47.857037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.710 [2024-07-12 15:58:47.857061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15206c0) on tqpair=0x14c06e0 00:21:50.710 [2024-07-12 15:58:47.857069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.710 [2024-07-12 15:58:47.857077] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1520840) on tqpair=0x14c06e0 00:21:50.710 [2024-07-12 15:58:47.857085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.710 [2024-07-12 15:58:47.857093] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15209c0) on tqpair=0x14c06e0 00:21:50.710 [2024-07-12 15:58:47.857100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.710 [2024-07-12 15:58:47.857113] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.857121] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.857127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14c06e0) 00:21:50.710 [2024-07-12 15:58:47.857141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.710 [2024-07-12 15:58:47.857164] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15209c0, cid 3, qid 0 00:21:50.710 [2024-07-12 15:58:47.857281] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.710 [2024-07-12 15:58:47.857294] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.710 [2024-07-12 15:58:47.857301] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.857307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15209c0) on tqpair=0x14c06e0 00:21:50.710 [2024-07-12 15:58:47.857318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.857326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.857332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14c06e0) 00:21:50.710 [2024-07-12 15:58:47.857343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.710 [2024-07-12 15:58:47.857369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15209c0, cid 3, qid 0 00:21:50.710 [2024-07-12 15:58:47.857481] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.710 [2024-07-12 15:58:47.857492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.710 [2024-07-12 15:58:47.857499] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.857506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15209c0) on tqpair=0x14c06e0 00:21:50.710 [2024-07-12 15:58:47.857513] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:50.710 [2024-07-12 15:58:47.857521] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:50.710 [2024-07-12 15:58:47.857536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.857545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.857551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14c06e0) 00:21:50.710 [2024-07-12 15:58:47.857562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.710 [2024-07-12 15:58:47.857582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15209c0, cid 3, qid 0 00:21:50.710 [2024-07-12 15:58:47.857676] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.710 [2024-07-12 15:58:47.857689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.710 [2024-07-12 15:58:47.857696] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.857703] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15209c0) on tqpair=0x14c06e0 00:21:50.710 [2024-07-12 15:58:47.857733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.857751] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.857759] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14c06e0) 00:21:50.710 [2024-07-12 15:58:47.857770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.710 [2024-07-12 15:58:47.857791] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15209c0, cid 3, qid 0 00:21:50.710 [2024-07-12 15:58:47.857923] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.710 [2024-07-12 15:58:47.857937] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.710 [2024-07-12 15:58:47.857944] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.857951] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15209c0) on tqpair=0x14c06e0 00:21:50.710 [2024-07-12 15:58:47.857969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.857982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.857989] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14c06e0) 00:21:50.710 [2024-07-12 15:58:47.858000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.710 [2024-07-12 15:58:47.858021] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15209c0, cid 3, qid 0 00:21:50.710 [2024-07-12 15:58:47.858147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.710 [2024-07-12 15:58:47.858161] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.710 [2024-07-12 15:58:47.858167] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.858174] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15209c0) on tqpair=0x14c06e0 00:21:50.710 [2024-07-12 15:58:47.858190] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.858199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.858206] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14c06e0) 00:21:50.710 [2024-07-12 15:58:47.858216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.710 [2024-07-12 15:58:47.858237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15209c0, cid 3, qid 0 00:21:50.710 [2024-07-12 15:58:47.858326] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.710 [2024-07-12 15:58:47.858339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.710 [2024-07-12 15:58:47.858346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.858353] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15209c0) on tqpair=0x14c06e0 00:21:50.710 [2024-07-12 15:58:47.858369] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.858378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.858384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14c06e0) 00:21:50.710 [2024-07-12 15:58:47.858394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.710 [2024-07-12 15:58:47.858415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15209c0, cid 3, qid 0 00:21:50.710 [2024-07-12 
15:58:47.858528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.710 [2024-07-12 15:58:47.858541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.710 [2024-07-12 15:58:47.858548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.858555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15209c0) on tqpair=0x14c06e0 00:21:50.710 [2024-07-12 15:58:47.858571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.858580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.858586] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14c06e0) 00:21:50.710 [2024-07-12 15:58:47.858597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.710 [2024-07-12 15:58:47.858617] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15209c0, cid 3, qid 0 00:21:50.710 [2024-07-12 15:58:47.862748] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.710 [2024-07-12 15:58:47.862766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.710 [2024-07-12 15:58:47.862773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.862780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15209c0) on tqpair=0x14c06e0 00:21:50.710 [2024-07-12 15:58:47.862799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.862809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.862819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x14c06e0) 00:21:50.710 [2024-07-12 15:58:47.862831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.710 [2024-07-12 15:58:47.862854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15209c0, cid 3, qid 0 00:21:50.710 [2024-07-12 15:58:47.862973] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:50.710 [2024-07-12 15:58:47.862987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:50.710 [2024-07-12 15:58:47.862994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:50.710 [2024-07-12 15:58:47.863001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15209c0) on tqpair=0x14c06e0 00:21:50.710 [2024-07-12 15:58:47.863015] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:21:50.710 0% 00:21:50.710 Data Units Read: 0 00:21:50.710 Data Units Written: 0 00:21:50.710 Host Read Commands: 0 00:21:50.711 Host Write Commands: 0 00:21:50.711 Controller Busy Time: 0 minutes 00:21:50.711 Power Cycles: 0 00:21:50.711 Power On Hours: 0 hours 00:21:50.711 Unsafe Shutdowns: 0 00:21:50.711 Unrecoverable Media Errors: 0 00:21:50.711 Lifetime Error Log Entries: 0 00:21:50.711 Warning Temperature Time: 0 minutes 00:21:50.711 Critical Temperature Time: 0 minutes 00:21:50.711 00:21:50.711 Number of Queues 00:21:50.711 ================ 00:21:50.711 Number of I/O Submission Queues: 127 00:21:50.711 Number of I/O Completion Queues: 127 00:21:50.711 00:21:50.711 Active Namespaces 00:21:50.711 
================= 00:21:50.711 Namespace ID:1 00:21:50.711 Error Recovery Timeout: Unlimited 00:21:50.711 Command Set Identifier: NVM (00h) 00:21:50.711 Deallocate: Supported 00:21:50.711 Deallocated/Unwritten Error: Not Supported 00:21:50.711 Deallocated Read Value: Unknown 00:21:50.711 Deallocate in Write Zeroes: Not Supported 00:21:50.711 Deallocated Guard Field: 0xFFFF 00:21:50.711 Flush: Supported 00:21:50.711 Reservation: Supported 00:21:50.711 Namespace Sharing Capabilities: Multiple Controllers 00:21:50.711 Size (in LBAs): 131072 (0GiB) 00:21:50.711 Capacity (in LBAs): 131072 (0GiB) 00:21:50.711 Utilization (in LBAs): 131072 (0GiB) 00:21:50.711 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:50.711 EUI64: ABCDEF0123456789 00:21:50.711 UUID: e242ab4d-1e9a-4b54-92c5-dd98f0a26b41 00:21:50.711 Thin Provisioning: Not Supported 00:21:50.711 Per-NS Atomic Units: Yes 00:21:50.711 Atomic Boundary Size (Normal): 0 00:21:50.711 Atomic Boundary Size (PFail): 0 00:21:50.711 Atomic Boundary Offset: 0 00:21:50.711 Maximum Single Source Range Length: 65535 00:21:50.711 Maximum Copy Length: 65535 00:21:50.711 Maximum Source Range Count: 1 00:21:50.711 NGUID/EUI64 Never Reused: No 00:21:50.711 Namespace Write Protected: No 00:21:50.711 Number of LBA Formats: 1 00:21:50.711 Current LBA Format: LBA Format #00 00:21:50.711 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:50.711 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:50.711 rmmod nvme_tcp 00:21:50.711 rmmod nvme_fabrics 00:21:50.711 rmmod nvme_keyring 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 811908 ']' 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 811908 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 811908 ']' 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 811908 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify 
-- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 811908 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 811908' 00:21:50.711 killing process with pid 811908 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 811908 00:21:50.711 15:58:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 811908 00:21:50.968 15:58:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:50.968 15:58:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:50.968 15:58:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:50.968 15:58:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.968 15:58:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:50.968 15:58:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.968 15:58:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.968 15:58:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.499 15:58:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:53.499 00:21:53.499 real 0m6.118s 00:21:53.499 user 0m7.111s 00:21:53.499 sys 0m1.952s 00:21:53.499 15:58:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:53.499 15:58:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:53.499 ************************************ 00:21:53.499 END TEST nvmf_identify 00:21:53.499 ************************************ 00:21:53.499 15:58:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:53.499 15:58:50 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:53.499 15:58:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:53.499 15:58:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:53.499 15:58:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:53.499 ************************************ 00:21:53.499 START TEST nvmf_perf 00:21:53.499 ************************************ 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:53.499 * Looking for test storage... 
00:21:53.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.499 15:58:50 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:21:53.499 15:58:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:55.402 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:21:55.402 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:55.402 Found net devices under 0000:84:00.0: cvl_0_0 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:55.402 Found net devices under 0000:84:00.1: cvl_0_1 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:21:55.402 00:21:55.402 --- 10.0.0.2 ping statistics --- 00:21:55.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.402 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:55.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:21:55.402 00:21:55.402 --- 10.0.0.1 ping statistics --- 00:21:55.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.402 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=814014 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 814014 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 814014 ']' 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:55.402 15:58:52 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:55.661 [2024-07-12 15:58:52.708620] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:21:55.661 [2024-07-12 15:58:52.708716] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.661 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.661 [2024-07-12 15:58:52.772294] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:55.661 [2024-07-12 15:58:52.874845] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.661 [2024-07-12 15:58:52.874892] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
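The nvmf_tcp_init sequence traced above isolates the two detected E810 ports from each other by moving the target-side port into its own network namespace and then launching the target application inside that namespace. A minimal standalone sketch of the equivalent steps, assuming the values this particular run detected (interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing, the core/event masks; the SPDK tree path is abbreviated and will differ on other inventory):

# target-side port lives in a dedicated namespace; initiator-side port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic reach the initiator port
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target app runs inside the namespace; the suite then waits for /var/tmp/spdk.sock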
00:21:55.661 [2024-07-12 15:58:52.874907] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.661 [2024-07-12 15:58:52.874919] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.661 [2024-07-12 15:58:52.874930] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.661 [2024-07-12 15:58:52.875012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.661 [2024-07-12 15:58:52.875085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.661 [2024-07-12 15:58:52.875152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.661 [2024-07-12 15:58:52.875155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.925 15:58:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.925 15:58:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:21:55.925 15:58:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:55.925 15:58:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:55.925 15:58:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:55.925 15:58:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.925 15:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:55.925 15:58:53 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:59.270 15:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:59.270 15:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:59.270 15:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:21:59.270 15:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:59.527 15:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:59.527 15:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:82:00.0 ']' 00:21:59.527 15:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:59.527 15:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:59.527 15:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:59.784 [2024-07-12 15:58:56.977135] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.784 15:58:56 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:00.041 15:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:00.041 15:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:00.298 15:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:00.298 15:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:00.556 15:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.813 [2024-07-12 15:58:57.968732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.813 15:58:57 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:01.070 15:58:58 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:22:01.070 15:58:58 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:22:01.070 15:58:58 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:01.070 15:58:58 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:22:02.442 Initializing NVMe Controllers 00:22:02.442 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:22:02.442 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:22:02.442 Initialization complete. Launching workers. 00:22:02.442 ======================================================== 00:22:02.442 Latency(us) 00:22:02.442 Device Information : IOPS MiB/s Average min max 00:22:02.442 PCIE (0000:82:00.0) NSID 1 from core 0: 83524.80 326.27 378.44 27.40 15293.15 00:22:02.442 ======================================================== 00:22:02.442 Total : 83524.80 326.27 378.44 27.40 15293.15 00:22:02.442 00:22:02.442 15:58:59 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:02.442 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.374 Initializing NVMe Controllers 00:22:03.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:03.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:03.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:03.374 Initialization complete. Launching workers. 
00:22:03.374 ======================================================== 00:22:03.374 Latency(us) 00:22:03.374 Device Information : IOPS MiB/s Average min max 00:22:03.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 111.61 0.44 8954.02 134.14 44847.73 00:22:03.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.80 0.22 17905.49 7121.80 48855.31 00:22:03.374 ======================================================== 00:22:03.374 Total : 167.41 0.65 11937.84 134.14 48855.31 00:22:03.374 00:22:03.374 15:59:00 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:03.631 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.998 Initializing NVMe Controllers 00:22:04.998 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:04.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:04.998 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:04.998 Initialization complete. Launching workers. 00:22:04.998 ======================================================== 00:22:04.998 Latency(us) 00:22:04.998 Device Information : IOPS MiB/s Average min max 00:22:04.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8342.96 32.59 3820.19 568.02 25001.84 00:22:04.998 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3797.42 14.83 8294.65 5066.12 22835.48 00:22:04.998 ======================================================== 00:22:04.998 Total : 12140.38 47.42 5219.77 568.02 25001.84 00:22:04.998 00:22:04.998 15:59:01 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:04.998 15:59:01 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:04.998 15:59:01 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:04.998 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.521 Initializing NVMe Controllers 00:22:07.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:07.521 Controller IO queue size 128, less than required. 00:22:07.521 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.521 Controller IO queue size 128, less than required. 00:22:07.521 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:07.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:07.521 Initialization complete. Launching workers. 
00:22:07.521 ======================================================== 00:22:07.521 Latency(us) 00:22:07.521 Device Information : IOPS MiB/s Average min max 00:22:07.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1456.40 364.10 87828.02 54564.62 172108.69 00:22:07.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 573.28 143.32 225334.29 132269.01 320777.31 00:22:07.521 ======================================================== 00:22:07.521 Total : 2029.67 507.42 126666.44 54564.62 320777.31 00:22:07.521 00:22:07.521 15:59:04 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:07.521 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.778 No valid NVMe controllers or AIO or URING devices found 00:22:07.778 Initializing NVMe Controllers 00:22:07.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:07.778 Controller IO queue size 128, less than required. 00:22:07.778 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.778 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:07.778 Controller IO queue size 128, less than required. 00:22:07.779 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:07.779 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:07.779 WARNING: Some requested NVMe devices were skipped 00:22:07.779 15:59:04 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:07.779 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.305 Initializing NVMe Controllers 00:22:10.305 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:10.305 Controller IO queue size 128, less than required. 00:22:10.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:10.305 Controller IO queue size 128, less than required. 00:22:10.305 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:10.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:10.305 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:10.305 Initialization complete. Launching workers. 
00:22:10.305 00:22:10.305 ==================== 00:22:10.305 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:10.305 TCP transport: 00:22:10.305 polls: 9326 00:22:10.305 idle_polls: 6580 00:22:10.305 sock_completions: 2746 00:22:10.305 nvme_completions: 4833 00:22:10.305 submitted_requests: 7258 00:22:10.305 queued_requests: 1 00:22:10.305 00:22:10.305 ==================== 00:22:10.305 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:10.305 TCP transport: 00:22:10.305 polls: 12025 00:22:10.305 idle_polls: 9023 00:22:10.305 sock_completions: 3002 00:22:10.305 nvme_completions: 5467 00:22:10.305 submitted_requests: 8276 00:22:10.305 queued_requests: 1 00:22:10.305 ======================================================== 00:22:10.305 Latency(us) 00:22:10.305 Device Information : IOPS MiB/s Average min max 00:22:10.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1205.41 301.35 109013.63 77596.52 204468.35 00:22:10.305 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1363.56 340.89 92311.43 56545.16 129486.67 00:22:10.305 ======================================================== 00:22:10.305 Total : 2568.97 642.24 100148.40 56545.16 204468.35 00:22:10.305 00:22:10.305 15:59:07 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:10.305 15:59:07 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:10.305 15:59:07 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:10.305 15:59:07 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:10.305 15:59:07 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:10.306 15:59:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:10.306 15:59:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:10.306 15:59:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:10.306 15:59:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:10.306 15:59:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:10.306 15:59:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:10.306 rmmod nvme_tcp 00:22:10.564 rmmod nvme_fabrics 00:22:10.564 rmmod nvme_keyring 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 814014 ']' 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 814014 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 814014 ']' 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 814014 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 814014 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 814014' 00:22:10.564 killing process with pid 814014 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 814014 00:22:10.564 15:59:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 814014 00:22:12.474 15:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:12.474 15:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:12.474 15:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:12.474 15:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:12.474 15:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:12.474 15:59:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.474 15:59:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.474 15:59:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.382 15:59:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:14.382 00:22:14.382 real 0m21.060s 00:22:14.382 user 1m4.590s 00:22:14.382 sys 0m5.652s 00:22:14.382 15:59:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:14.382 15:59:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:14.382 ************************************ 00:22:14.382 END TEST nvmf_perf 00:22:14.382 ************************************ 00:22:14.382 15:59:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:14.382 15:59:11 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:14.382 15:59:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:14.382 15:59:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:14.382 15:59:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:14.382 ************************************ 00:22:14.382 START TEST nvmf_fio_host 00:22:14.382 ************************************ 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:14.382 * Looking for test storage... 
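The nvmf_perf pass that just finished brings up the target entirely over JSON-RPC before pointing the perf tool at the TCP listener. Condensed from the trace above (rpc.py is invoked with its full workspace path in the log; cnode1 and the 10.0.0.2:4420 listener are the values this run used, and Nvme0n1 is the bdev created from the local 0000:82:00.0 drive):

rpc.py nvmf_create_transport -t tcp -o
rpc.py bdev_malloc_create 64 512
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

The later perf invocations in the log vary only the queue depth, IO size, and run time against the same listener.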
00:22:14.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:14.382 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:14.383 15:59:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:14.383 15:59:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:14.383 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:14.383 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.383 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:14.383 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:14.383 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:14.383 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.383 15:59:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.383 15:59:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.383 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:14.383 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:14.383 15:59:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:14.383 15:59:11 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.280 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:16.281 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:16.281 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:16.281 Found net devices under 0000:84:00.0: cvl_0_0 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:16.281 Found net devices under 0000:84:00.1: cvl_0_1 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
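The discovery pass above classifies NICs by PCI vendor:device ID (using the pci_bus_cache built earlier in nvmf/common.sh) and then maps each matched PCI address to its kernel net device through sysfs, keeping only interfaces whose link is up. A simplified sketch of that mapping, not the script's exact logic (it walks the cached bus listing rather than calling lspci, and 0x159b is the E810 device ID this host happened to match; the operstate test stands in for the script's link check):

for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    echo "Found $pci (0x8086 - 0x159b)"
    for path in /sys/bus/pci/devices/$pci/net/*; do
        dev=${path##*/}
        # only interfaces that are up end up in net_devs (cvl_0_0 and cvl_0_1 on this host)
        [[ $(cat "$path/operstate") == up ]] && echo "Found net devices under $pci: $dev"
    done
done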
00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.281 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:16.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:22:16.538 00:22:16.538 --- 10.0.0.2 ping statistics --- 00:22:16.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.538 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:16.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:22:16.538 00:22:16.538 --- 10.0.0.1 ping statistics --- 00:22:16.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.538 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=817985 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 817985 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 817985 ']' 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.538 15:59:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.538 [2024-07-12 15:59:13.715453] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:22:16.539 [2024-07-12 15:59:13.715554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.539 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.539 [2024-07-12 15:59:13.779667] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.795 [2024-07-12 15:59:13.883044] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:16.795 [2024-07-12 15:59:13.883109] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.795 [2024-07-12 15:59:13.883123] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.795 [2024-07-12 15:59:13.883134] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.795 [2024-07-12 15:59:13.883144] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.796 [2024-07-12 15:59:13.883243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.796 [2024-07-12 15:59:13.883306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.796 [2024-07-12 15:59:13.883376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.796 [2024-07-12 15:59:13.883374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.796 15:59:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.796 15:59:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:22:16.796 15:59:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:17.052 [2024-07-12 15:59:14.284455] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.052 15:59:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:17.052 15:59:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:17.052 15:59:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:17.052 15:59:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:17.310 Malloc1 00:22:17.567 15:59:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.567 15:59:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:17.824 15:59:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.080 [2024-07-12 15:59:15.319802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.081 15:59:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:18.337 15:59:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:18.337 15:59:15 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:18.337 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:22:18.337 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:18.337 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:18.337 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:18.337 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:18.337 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:18.337 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:18.337 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:18.338 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:18.338 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:18.338 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:18.338 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:18.338 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:18.338 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:18.338 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:18.338 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:18.338 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:18.338 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:18.338 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:18.338 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:18.338 15:59:15 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:18.594 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:18.594 fio-3.35 00:22:18.594 Starting 1 thread 00:22:18.594 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.122 [2024-07-12 15:59:18.116846] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c13d10 is same with the state(5) to be set 00:22:21.122 [2024-07-12 15:59:18.116913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c13d10 is same with the state(5) to be set 00:22:21.122 [2024-07-12 15:59:18.116929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c13d10 is same with the state(5) to be set 00:22:21.122 [2024-07-12 15:59:18.116941] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c13d10 is same with the state(5) to be set 00:22:21.122 00:22:21.122 test: (groupid=0, jobs=1): err= 0: pid=818341: Fri Jul 12 15:59:18 2024 00:22:21.122 read: IOPS=9105, BW=35.6MiB/s 
(37.3MB/s)(71.3MiB/2006msec) 00:22:21.122 slat (usec): min=2, max=159, avg= 3.15, stdev= 2.35 00:22:21.122 clat (usec): min=2449, max=12584, avg=7680.00, stdev=596.96 00:22:21.122 lat (usec): min=2472, max=12588, avg=7683.15, stdev=596.84 00:22:21.122 clat percentiles (usec): 00:22:21.122 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7177], 00:22:21.122 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:22:21.122 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8356], 95.00th=[ 8586], 00:22:21.122 | 99.00th=[ 8979], 99.50th=[ 9241], 99.90th=[11207], 99.95th=[11731], 00:22:21.122 | 99.99th=[12518] 00:22:21.122 bw ( KiB/s): min=35480, max=36944, per=99.92%, avg=36390.00, stdev=632.03, samples=4 00:22:21.122 iops : min= 8870, max= 9236, avg=9097.50, stdev=158.01, samples=4 00:22:21.122 write: IOPS=9118, BW=35.6MiB/s (37.3MB/s)(71.4MiB/2006msec); 0 zone resets 00:22:21.122 slat (usec): min=2, max=133, avg= 3.30, stdev= 1.96 00:22:21.122 clat (usec): min=1424, max=11349, avg=6317.64, stdev=502.45 00:22:21.122 lat (usec): min=1431, max=11352, avg=6320.95, stdev=502.37 00:22:21.122 clat percentiles (usec): 00:22:21.122 | 1.00th=[ 5211], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5932], 00:22:21.122 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6456], 00:22:21.122 | 70.00th=[ 6587], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7046], 00:22:21.122 | 99.00th=[ 7504], 99.50th=[ 7635], 99.90th=[ 9110], 99.95th=[10421], 00:22:21.122 | 99.99th=[11338] 00:22:21.122 bw ( KiB/s): min=36240, max=36800, per=99.99%, avg=36468.00, stdev=266.97, samples=4 00:22:21.122 iops : min= 9060, max= 9200, avg=9117.00, stdev=66.74, samples=4 00:22:21.122 lat (msec) : 2=0.03%, 4=0.11%, 10=99.74%, 20=0.12% 00:22:21.122 cpu : usr=67.48%, sys=30.77%, ctx=109, majf=0, minf=28 00:22:21.122 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:21.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.122 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:21.122 issued rwts: total=18265,18291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.122 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:21.122 00:22:21.122 Run status group 0 (all jobs): 00:22:21.122 READ: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=71.3MiB (74.8MB), run=2006-2006msec 00:22:21.122 WRITE: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=71.4MiB (74.9MB), run=2006-2006msec 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:21.122 15:59:18 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:21.122 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:21.122 fio-3.35 00:22:21.122 Starting 1 thread 00:22:21.122 EAL: No free 2048 kB hugepages reported on node 1 00:22:23.648 00:22:23.648 test: (groupid=0, jobs=1): err= 0: pid=818679: Fri Jul 12 15:59:20 2024 00:22:23.648 read: IOPS=8165, BW=128MiB/s (134MB/s)(256MiB/2008msec) 00:22:23.648 slat (usec): min=3, max=131, avg= 4.56, stdev= 2.58 00:22:23.648 clat (usec): min=1666, max=17598, avg=9121.47, stdev=2330.06 00:22:23.648 lat (usec): min=1671, max=17602, avg=9126.04, stdev=2330.09 00:22:23.648 clat percentiles (usec): 00:22:23.648 | 1.00th=[ 4817], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 7111], 00:22:23.648 | 30.00th=[ 7635], 40.00th=[ 8225], 50.00th=[ 8848], 60.00th=[ 9634], 00:22:23.648 | 70.00th=[10421], 80.00th=[11076], 90.00th=[12125], 95.00th=[13304], 00:22:23.648 | 99.00th=[15008], 99.50th=[15664], 99.90th=[16909], 99.95th=[17171], 00:22:23.648 | 99.99th=[17433] 00:22:23.648 bw ( KiB/s): min=60992, max=75680, per=51.72%, avg=67576.00, stdev=7452.38, samples=4 00:22:23.648 iops : min= 3812, max= 4730, avg=4223.50, stdev=465.77, samples=4 00:22:23.648 write: IOPS=4841, BW=75.6MiB/s (79.3MB/s)(138MiB/1829msec); 0 zone resets 00:22:23.648 slat (usec): min=30, max=162, avg=38.23, stdev= 6.29 00:22:23.648 clat (usec): min=2908, max=21280, avg=11564.37, stdev=1824.47 00:22:23.648 lat (usec): min=2949, max=21315, avg=11602.60, stdev=1824.30 00:22:23.648 clat 
percentiles (usec): 00:22:23.648 | 1.00th=[ 7963], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10159], 00:22:23.648 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:22:23.648 | 70.00th=[12256], 80.00th=[12780], 90.00th=[13960], 95.00th=[14877], 00:22:23.648 | 99.00th=[16909], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:22:23.648 | 99.99th=[21365] 00:22:23.648 bw ( KiB/s): min=63008, max=79584, per=90.64%, avg=70216.00, stdev=8477.26, samples=4 00:22:23.648 iops : min= 3938, max= 4974, avg=4388.50, stdev=529.83, samples=4 00:22:23.648 lat (msec) : 2=0.02%, 4=0.20%, 10=48.60%, 20=51.18%, 50=0.01% 00:22:23.648 cpu : usr=82.46%, sys=15.99%, ctx=41, majf=0, minf=61 00:22:23.648 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:23.648 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:23.648 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:23.648 issued rwts: total=16396,8855,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:23.648 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:23.648 00:22:23.648 Run status group 0 (all jobs): 00:22:23.648 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=256MiB (269MB), run=2008-2008msec 00:22:23.648 WRITE: bw=75.6MiB/s (79.3MB/s), 75.6MiB/s-75.6MiB/s (79.3MB/s-79.3MB/s), io=138MiB (145MB), run=1829-1829msec 00:22:23.648 15:59:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.906 15:59:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:23.906 15:59:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:23.906 15:59:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:23.906 15:59:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:23.906 15:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:23.906 15:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:22:23.906 15:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.906 15:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:22:23.906 15:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.906 15:59:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.906 rmmod nvme_tcp 00:22:23.906 rmmod nvme_fabrics 00:22:23.906 rmmod nvme_keyring 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 817985 ']' 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 817985 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 817985 ']' 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 817985 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 817985 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 817985' 00:22:23.906 killing process with pid 817985 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 817985 00:22:23.906 15:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 817985 00:22:24.165 15:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:24.165 15:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:24.165 15:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:24.165 15:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:24.165 15:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:24.165 15:59:21 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.165 15:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:24.165 15:59:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.109 15:59:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:26.109 00:22:26.109 real 0m11.919s 00:22:26.109 user 0m35.248s 00:22:26.109 sys 0m3.754s 00:22:26.109 15:59:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:26.109 15:59:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.109 ************************************ 00:22:26.109 END TEST nvmf_fio_host 00:22:26.109 ************************************ 00:22:26.109 15:59:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:26.109 15:59:23 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:26.109 15:59:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:26.109 15:59:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:26.109 15:59:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:26.388 ************************************ 00:22:26.388 START TEST nvmf_failover 00:22:26.388 ************************************ 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:26.388 * Looking for test storage... 
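In outline, the target bring-up that host/fio.sh traced above (and that host/failover.sh repeats below with Malloc0 and additional listener ports) reduces to the following sequence. This is a condensed sketch of the commands already shown in the trace; SPDK_ROOT standing for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout and the /usr/src/fio path are assumptions of this job's layout, not new steps:

# sketch only: SPDK_ROOT and the fio install path are assumed from this job's workspace layout
rpc=$SPDK_ROOT/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                                    # TCP transport init (flags as traced above)
$rpc bdev_malloc_create 64 512 -b Malloc1                                       # 64 MB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host (-a), fixed serial (-s)
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                   # expose the malloc bdev as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# fio then drives the subsystem through the SPDK NVMe fio plugin:
LD_PRELOAD=$SPDK_ROOT/build/fio/spdk_nvme /usr/src/fio/fio \
    $SPDK_ROOT/app/fio/nvme/example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

As the fio_plugin trace above shows, the helper first runs ldd against the spdk_nvme plugin looking for libasan/libclang_rt.asan and would prepend any hit to LD_PRELOAD; neither was found here, so LD_PRELOAD carries only the plugin itself when fio is launched.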
00:22:26.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.388 15:59:23 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.389 15:59:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:28.288 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:28.288 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:28.288 Found net devices under 0000:84:00.0: cvl_0_0 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:28.288 Found net devices under 0000:84:00.1: cvl_0_1 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.288 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:28.289 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.289 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.289 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:28.289 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.289 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.289 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:28.289 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:28.289 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.289 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.289 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.289 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.289 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:28.289 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:28.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:28.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:22:28.547 00:22:28.547 --- 10.0.0.2 ping statistics --- 00:22:28.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.547 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:28.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:22:28.547 00:22:28.547 --- 10.0.0.1 ping statistics --- 00:22:28.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.547 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=820889 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 820889 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 820889 ']' 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.547 15:59:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:28.547 [2024-07-12 15:59:25.687039] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:22:28.547 [2024-07-12 15:59:25.687126] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.547 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.547 [2024-07-12 15:59:25.754069] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:28.805 [2024-07-12 15:59:25.868109] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.805 [2024-07-12 15:59:25.868158] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.805 [2024-07-12 15:59:25.868182] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.805 [2024-07-12 15:59:25.868194] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.805 [2024-07-12 15:59:25.868204] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.805 [2024-07-12 15:59:25.868275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.805 [2024-07-12 15:59:25.868336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.805 [2024-07-12 15:59:25.868339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.805 15:59:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.805 15:59:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:28.805 15:59:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.805 15:59:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:28.805 15:59:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:28.805 15:59:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.805 15:59:26 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:29.062 [2024-07-12 15:59:26.288554] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.062 15:59:26 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:29.627 Malloc0 00:22:29.627 15:59:26 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:29.884 15:59:26 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:29.884 15:59:27 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.141 [2024-07-12 15:59:27.393460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.141 15:59:27 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:30.398 [2024-07-12 
15:59:27.686405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:30.655 15:59:27 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:30.912 [2024-07-12 15:59:27.979280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:30.912 15:59:27 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=821250 00:22:30.912 15:59:27 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:30.912 15:59:27 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.912 15:59:27 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 821250 /var/tmp/bdevperf.sock 00:22:30.912 15:59:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 821250 ']' 00:22:30.912 15:59:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.912 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.912 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:30.912 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.912 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:31.169 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.169 15:59:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:31.169 15:59:28 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:31.732 NVMe0n1 00:22:31.732 15:59:28 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:31.989 00:22:31.989 15:59:29 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=821378 00:22:31.989 15:59:29 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:31.989 15:59:29 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:32.922 15:59:30 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:33.180 [2024-07-12 15:59:30.379810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.379881] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.379905] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.379919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.379931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.379945] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.379958] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.379979] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.379992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380042] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380083] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380220] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380232] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380258] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380295] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380321] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380334] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380405] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380430] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the 
state(5) to be set 00:22:33.180 [2024-07-12 15:59:30.380509] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f2520 is same with the state(5) to be set 00:22:33.180 15:59:30 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:36.458 15:59:33 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:36.715 00:22:36.715 15:59:33 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:36.972 [2024-07-12 15:59:34.212530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.972 [2024-07-12 15:59:34.212593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.972 [2024-07-12 15:59:34.212608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.972 [2024-07-12 15:59:34.212620] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.972 [2024-07-12 15:59:34.212632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.972 [2024-07-12 15:59:34.212643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.972 [2024-07-12 15:59:34.212654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.973 [2024-07-12 15:59:34.212666] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.973 [2024-07-12 15:59:34.212677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.973 [2024-07-12 15:59:34.212689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.973 [2024-07-12 15:59:34.212701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.973 [2024-07-12 15:59:34.212762] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.973 [2024-07-12 15:59:34.212778] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.973 [2024-07-12 15:59:34.212791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.973 [2024-07-12 15:59:34.212803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.973 [2024-07-12 15:59:34.212816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.973 [2024-07-12 15:59:34.212828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 
00:22:36.973 [2024-07-12 15:59:34.212841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f3490 is same with the state(5) to be set 00:22:36.973 15:59:34 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:40.249 15:59:37 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.249 [2024-07-12 15:59:37.493030] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.249 15:59:37 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:41.616 15:59:38 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:41.616 [2024-07-12 15:59:38.786982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787074] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787089] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787101] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787124] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787136] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787181] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787193] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787217] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787239] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 
15:59:38.787261] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787320] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787333] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 [2024-07-12 15:59:38.787344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad1a0 is same with the state(5) to be set 00:22:41.616 15:59:38 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 821378 00:22:48.178 0 00:22:48.178 15:59:44 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 821250 00:22:48.178 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 821250 ']' 00:22:48.178 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 821250 00:22:48.178 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:48.178 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:48.178 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 821250 00:22:48.178 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:48.178 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:48.178 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 821250' 00:22:48.178 killing process with pid 821250 00:22:48.178 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 821250 00:22:48.178 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 821250 00:22:48.178 15:59:44 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:48.178 [2024-07-12 15:59:28.044618] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:22:48.178 [2024-07-12 15:59:28.044704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid821250 ] 00:22:48.178 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.178 [2024-07-12 15:59:28.106560] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.178 [2024-07-12 15:59:28.217082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.178 Running I/O for 15 seconds... 
00:22:48.178 [2024-07-12 15:59:30.382050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.178 [2024-07-12 15:59:30.382118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.178 [2024-07-12 15:59:30.382147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.178 [2024-07-12 15:59:30.382171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.178 [2024-07-12 15:59:30.382188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.178 [2024-07-12 15:59:30.382202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.178 [2024-07-12 15:59:30.382218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.178 [2024-07-12 15:59:30.382233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.178 [2024-07-12 15:59:30.382250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.178 [2024-07-12 15:59:30.382264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.178 [2024-07-12 15:59:30.382280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.178 [2024-07-12 15:59:30.382295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.178 [2024-07-12 15:59:30.382311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.178 [2024-07-12 15:59:30.382326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.178 [2024-07-12 15:59:30.382341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.178 [2024-07-12 15:59:30.382355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.178 [2024-07-12 15:59:30.382370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.178 [2024-07-12 15:59:30.382384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.178 [2024-07-12 15:59:30.382399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.178 [2024-07-12 15:59:30.382414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.178 [2024-07-12 15:59:30.382429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.178 [2024-07-12 15:59:30.382443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:94 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.382980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.382995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82896 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.179 [2024-07-12 15:59:30.383282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.179 [2024-07-12 15:59:30.383316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.179 [2024-07-12 15:59:30.383345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.179 [2024-07-12 15:59:30.383375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.179 [2024-07-12 15:59:30.383404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.179 [2024-07-12 
15:59:30.383434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.179 [2024-07-12 15:59:30.383807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.179 [2024-07-12 15:59:30.383822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.383839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.383853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.383869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.383884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.383900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.383915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.383931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.383946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.383963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.383978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.383994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.180 [2024-07-12 15:59:30.384698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 
15:59:30.384714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.180 [2024-07-12 15:59:30.384728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.384969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.384983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.385000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.385014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.385040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.385069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.385086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.385100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.385116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.385130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.385146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.385161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.385176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.180 [2024-07-12 15:59:30.385190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.180 [2024-07-12 15:59:30.385206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.181 [2024-07-12 15:59:30.385220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.385236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.181 [2024-07-12 15:59:30.385251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.385266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.181 [2024-07-12 15:59:30.385280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.385295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.181 [2024-07-12 15:59:30.385309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.385325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.181 [2024-07-12 15:59:30.385343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.385359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.181 [2024-07-12 15:59:30.385373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.385389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:102 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.181 [2024-07-12 15:59:30.385403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.385433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.385449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83408 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.385463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.385681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.385700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.385713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83416 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.385728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.385780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.385795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.385808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83424 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.385821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.385837] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.385849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.385861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83432 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.385875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.385889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.385901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.385913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82544 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.385927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.385941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.385953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.385965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82552 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.385979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.385993] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82560 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82568 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82576 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82584 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82592 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83440 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386325] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83448 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83456 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386414] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83464 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386463] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83472 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83480 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386570] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83488 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386616] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82600 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82608 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.181 [2024-07-12 15:59:30.386759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82616 len:8 PRP1 0x0 PRP2 0x0 00:22:48.181 [2024-07-12 15:59:30.386786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.181 [2024-07-12 15:59:30.386801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.181 [2024-07-12 15:59:30.386813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.386825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82624 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.386841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.386855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.386866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.386879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82632 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.386893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.386906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.386918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.386929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82640 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.386943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.386956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.386968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.386979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82648 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.386992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387006] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82656 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82472 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387128] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82664 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82672 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82680 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387285] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 
15:59:30.387296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82688 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82696 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387393] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82704 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82712 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82720 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82728 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387591] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82736 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387640] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82744 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82752 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387763] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82760 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82768 len:8 PRP1 0x0 PRP2 0x0 00:22:48.182 [2024-07-12 15:59:30.387856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.182 [2024-07-12 15:59:30.387870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.182 [2024-07-12 15:59:30.387882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.182 [2024-07-12 15:59:30.387894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82776 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.387907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.387921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.387933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.387945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82784 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.387958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.387972] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.387983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.387995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82792 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388023] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82800 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82808 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82816 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82824 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 
15:59:30.388275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82832 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388301] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82840 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82848 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82856 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388450] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82864 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388503] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82872 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82880 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388602] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82888 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82896 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82904 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82912 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82920 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388889] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:82928 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.388944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.388956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.388968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82480 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.388986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.389001] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.389013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.183 [2024-07-12 15:59:30.389025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82488 len:8 PRP1 0x0 PRP2 0x0 00:22:48.183 [2024-07-12 15:59:30.389038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.183 [2024-07-12 15:59:30.389052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.183 [2024-07-12 15:59:30.389080] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.389092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82496 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.389105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.389119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.389130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.389142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82504 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.389155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.389168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.389179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.389191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82512 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.389208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.389221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.389233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.389245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82520 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 
[2024-07-12 15:59:30.389257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.389271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.389282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.389293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82936 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.389306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.389322] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.389334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.389346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82944 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.389359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.389372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.389383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.389394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82952 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.389412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.389425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.389437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.389449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82960 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.389462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.389475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.389486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.389498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82968 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.389511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.389524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.389535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.389547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82976 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.389560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.389573] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.389584] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.389595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82984 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.389612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.389626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.389637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.389649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82992 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.389661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.395887] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.395915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.395932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83000 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.395952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.395967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.395980] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.395992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83008 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.396006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.396020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.396046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.396058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83016 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.396072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.396086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.396098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.396109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83024 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.396122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.396135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.396146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.396157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83032 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.396171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.396184] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.396195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.396207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83040 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.396220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.396234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.396245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.396256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83048 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.396270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.396283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.396295] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.396307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83056 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.396320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.396333] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.396344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.396359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83064 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.396373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.184 [2024-07-12 15:59:30.396386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.184 [2024-07-12 15:59:30.396398] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.184 [2024-07-12 15:59:30.396409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83072 len:8 PRP1 0x0 PRP2 0x0 00:22:48.184 [2024-07-12 15:59:30.396422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:48.185 [2024-07-12 15:59:30.396436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.396448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.396459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83080 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.396472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.396486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.396497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.396509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83088 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.396522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.396535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.396547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.396558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83096 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.396571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.396585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.396596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.396608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83104 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.396621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.396635] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.396646] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.396657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83112 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.396671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.396684] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.396696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.396708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83120 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.396735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.396760] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.396776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.396789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83128 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.396803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.396817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.396829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.396841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83136 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.396855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.396868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.396880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.396891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83144 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.396905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.396918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.396930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.396942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83152 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.396956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.396970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.396981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.396993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83160 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.397007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.397020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.397032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.397044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83168 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.397057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.397087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.397099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.397111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83176 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.397124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.397138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.397149] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.397160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83184 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.397173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.397190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.397202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.397213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83192 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.397226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.397240] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.397251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.397263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83200 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.397276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.397289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.397300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.397312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83208 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.397325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.397338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.397349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.397361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83216 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.397374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.397388] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 
15:59:30.397399] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.397410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83224 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.397423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.397436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.397448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.397459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83232 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.397472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.397486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.397497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.397509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83240 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.397522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.397536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.397548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.185 [2024-07-12 15:59:30.397559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82528 len:8 PRP1 0x0 PRP2 0x0 00:22:48.185 [2024-07-12 15:59:30.397576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.185 [2024-07-12 15:59:30.397591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.185 [2024-07-12 15:59:30.397602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.397615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82536 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.397628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.397642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.397653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.397665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83248 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.397678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.397692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.397704] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.397716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83256 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.397729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.397767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.397780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.397793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83264 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.397807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.397822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.397833] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.397846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83272 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.397859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.397874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.397885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.397897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83280 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.397911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.397924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.397936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.397948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83288 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.397962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.397976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.397991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.398004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83296 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.398044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.398072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83304 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.398111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.398123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83312 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.398161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.398172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83320 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.398210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.398221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83328 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.398260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.398271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83336 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.398309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.398320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83344 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.398359] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 
15:59:30.398370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83352 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.398418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.398430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83360 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.398468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.398480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83368 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.398518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.398529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83376 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398555] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.398566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.398578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83384 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.398615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.398626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83392 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.398664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.398675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83400 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.186 [2024-07-12 15:59:30.398712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.186 [2024-07-12 15:59:30.398746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83408 len:8 PRP1 0x0 PRP2 0x0 00:22:48.186 [2024-07-12 15:59:30.398761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.186 [2024-07-12 15:59:30.398820] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ee3af0 was disconnected and freed. reset controller. 00:22:48.187 [2024-07-12 15:59:30.398839] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:48.187 [2024-07-12 15:59:30.398878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.187 [2024-07-12 15:59:30.398902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:30.398919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.187 [2024-07-12 15:59:30.398941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:30.398956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.187 [2024-07-12 15:59:30.398969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:30.398984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.187 [2024-07-12 15:59:30.399006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:30.399020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:48.187 [2024-07-12 15:59:30.399082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebd8d0 (9): Bad file descriptor 00:22:48.187 [2024-07-12 15:59:30.402368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:48.187 [2024-07-12 15:59:30.563148] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:48.187 [2024-07-12 15:59:34.213920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:3664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.187 [2024-07-12 15:59:34.213967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:34.213997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.187 [2024-07-12 15:59:34.214014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:34.214054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.187 [2024-07-12 15:59:34.214069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:34.214084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.187 [2024-07-12 15:59:34.214099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:34.214114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.187 [2024-07-12 15:59:34.214128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:34.214144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.187 [2024-07-12 15:59:34.214158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:34.214173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.187 [2024-07-12 15:59:34.214188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:34.214203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.187 [2024-07-12 15:59:34.214222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:34.214238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.187 [2024-07-12 15:59:34.214252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:34.214268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.187 [2024-07-12 15:59:34.214282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:34.214297] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.187 [2024-07-12 15:59:34.214311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:34.214326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.187 [2024-07-12 15:59:34.214339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:34.214355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.187 [2024-07-12 15:59:34.214368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:34.214383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.187 [2024-07-12 15:59:34.214397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.187 [2024-07-12 15:59:34.214412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.188 [2024-07-12 15:59:34.214426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.188 [2024-07-12 15:59:34.214441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.188 [2024-07-12 15:59:34.214454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.188 [2024-07-12 15:59:34.214469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.188 [2024-07-12 15:59:34.214483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.188 [2024-07-12 15:59:34.214498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.188 [2024-07-12 15:59:34.214512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.188 [2024-07-12 15:59:34.214527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.188 [2024-07-12 15:59:34.214540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.188 [2024-07-12 15:59:34.214555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.188 [2024-07-12 15:59:34.214569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.188 [2024-07-12 15:59:34.214591] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.188 [2024-07-12 15:59:34.214606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.188 [2024-07-12 15:59:34.214621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.188 [2024-07-12 15:59:34.214635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.188 [2024-07-12 15:59:34.214650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.188 [2024-07-12 15:59:34.214664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.188 [2024-07-12 15:59:34.214679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.188 [2024-07-12 15:59:34.214693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.188 [2024-07-12 15:59:34.214708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.188 [2024-07-12 15:59:34.214744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.188 [2024-07-12 15:59:34.214763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.188 [2024-07-12 15:59:34.214777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.188 [2024-07-12 15:59:34.214793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.188 [2024-07-12 15:59:34.214808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.188 [2024-07-12 15:59:34.214824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.188 [2024-07-12 15:59:34.214838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.214854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.189 [2024-07-12 15:59:34.214868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.214884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.189 [2024-07-12 15:59:34.214899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.214915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3904 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.189 [2024-07-12 15:59:34.214929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.214944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.189 [2024-07-12 15:59:34.214959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.214974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.189 [2024-07-12 15:59:34.214991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.189 [2024-07-12 15:59:34.215037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.189 [2024-07-12 15:59:34.215066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.189 [2024-07-12 15:59:34.215095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.189 [2024-07-12 15:59:34.215124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.189 [2024-07-12 15:59:34.215153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.189 [2024-07-12 15:59:34.215182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 
[2024-07-12 15:59:34.215239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.189 [2024-07-12 15:59:34.215894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.189 [2024-07-12 15:59:34.215911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.215925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.215941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.215956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.215972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.215986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:48.190 [2024-07-12 15:59:34.216204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 
nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.216982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.216997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.217013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.217027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.217043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.217072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.217088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.217101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.217117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.217131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.217146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:48.190 [2024-07-12 15:59:34.217160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.217175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.217189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.217204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.190 [2024-07-12 15:59:34.217218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.190 [2024-07-12 15:59:34.217259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.217277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4504 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.217290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.217308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.217320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.217331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.217344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.217358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.217369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.217380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4520 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.217393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.217410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.217422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.217433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4528 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.217447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.217460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.217472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.217484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4536 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.217497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.217509] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.217520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.217532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.217544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.217557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.217568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.217581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4552 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.217594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.217607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.217619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.217630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4560 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.217644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.217657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.217668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.217680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4568 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.217693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.217706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.217732] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.217753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.217768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.217783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.217795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.217807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4584 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.217824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.217840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.217852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.217864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4592 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.217878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.217892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.217904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.217915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4600 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.217929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.217943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.217955] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.217967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.217981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.217995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.218006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.218018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4616 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.218047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.218061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.218072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.218083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4624 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.218096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.218109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.218121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.218132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4632 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.218145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:48.191 [2024-07-12 15:59:34.218159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.218170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.218181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.218194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.218207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.218218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.218233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4648 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.218246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.218261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.218273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.218284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4656 len:8 PRP1 0x0 PRP2 0x0 00:22:48.191 [2024-07-12 15:59:34.218297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.191 [2024-07-12 15:59:34.218311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.191 [2024-07-12 15:59:34.218332] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.191 [2024-07-12 15:59:34.218345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4664 len:8 PRP1 0x0 PRP2 0x0 00:22:48.192 [2024-07-12 15:59:34.218358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:34.218371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.192 [2024-07-12 15:59:34.218382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.192 [2024-07-12 15:59:34.218394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:8 PRP1 0x0 PRP2 0x0 00:22:48.192 [2024-07-12 15:59:34.218407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:34.218420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.192 [2024-07-12 15:59:34.218431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.192 [2024-07-12 15:59:34.218443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4680 len:8 PRP1 0x0 PRP2 0x0 00:22:48.192 [2024-07-12 15:59:34.218456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:34.218469] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.192 [2024-07-12 15:59:34.218480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.192 [2024-07-12 15:59:34.218491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3976 len:8 PRP1 0x0 PRP2 0x0 00:22:48.192 [2024-07-12 15:59:34.218504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:34.218562] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2088600 was disconnected and freed. reset controller. 00:22:48.192 [2024-07-12 15:59:34.218581] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:48.192 [2024-07-12 15:59:34.218615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.192 [2024-07-12 15:59:34.218641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:34.218656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.192 [2024-07-12 15:59:34.218669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:34.218694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.192 [2024-07-12 15:59:34.218708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:34.218751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.192 [2024-07-12 15:59:34.218769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:34.218783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:48.192 [2024-07-12 15:59:34.218836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebd8d0 (9): Bad file descriptor 00:22:48.192 [2024-07-12 15:59:34.222087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:48.192 [2024-07-12 15:59:34.259379] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:48.192 [2024-07-12 15:59:38.789346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789716] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:78600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.789971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.789987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.790002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.790017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:78632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.790031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.790063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.790078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.790094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.790107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.790123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:78656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.790137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.790152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:78664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.790170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.790186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.790200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.790215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.790229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.790244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.192 [2024-07-12 15:59:38.790258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.790274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:78696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.192 [2024-07-12 15:59:38.790288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.790304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.192 [2024-07-12 15:59:38.790318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.790333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.192 [2024-07-12 15:59:38.790347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.192 [2024-07-12 15:59:38.790362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:115 nsid:1 lba:78728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:78744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:78776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:78808 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:78832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:78848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.790972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.790989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.791004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.791034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 
15:59:38.791049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.791065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.791079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.791094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.791108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.791123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:78912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.791137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.791152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.791166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.791181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:78928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.791195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.791210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.791224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.791240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.791253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.791268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:78952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.791282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.791297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.791310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.791325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:78968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.791347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.791363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.791377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.791392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:78984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.193 [2024-07-12 15:59:38.791406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.193 [2024-07-12 15:59:38.791421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.194 [2024-07-12 15:59:38.791435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.194 [2024-07-12 15:59:38.791465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.194 [2024-07-12 15:59:38.791494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.194 [2024-07-12 15:59:38.791524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.194 [2024-07-12 15:59:38.791552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.194 [2024-07-12 15:59:38.791583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.194 [2024-07-12 15:59:38.791611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.194 [2024-07-12 15:59:38.791640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.194 [2024-07-12 15:59:38.791670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.194 [2024-07-12 15:59:38.791698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:48.194 [2024-07-12 15:59:38.791757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.791811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79080 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.791824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.791855] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.791866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79088 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.791880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.791904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.791916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79096 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.791932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.791957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.791969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79104 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.791983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.791997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:22:48.194 [2024-07-12 15:59:38.792035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79112 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.792048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.792062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.792085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79120 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.792098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.792111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.792134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79128 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.792147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.792161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.792187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79136 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.792201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.792214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.792237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79144 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.792249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.792262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.792286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79152 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.792299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.792312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 
15:59:38.792336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79160 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.792349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.792362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.792385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79168 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.792398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.792411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.792434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79176 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.792447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.792460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.792484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79184 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.792496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.792510] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792521] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.792533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79192 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.792546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.792562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792574] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.792586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79200 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.792599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.792612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792623] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.792634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79208 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.792648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.792662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.194 [2024-07-12 15:59:38.792684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79216 len:8 PRP1 0x0 PRP2 0x0 00:22:48.194 [2024-07-12 15:59:38.792697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.194 [2024-07-12 15:59:38.792710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.194 [2024-07-12 15:59:38.792743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.792757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79224 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.792772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.792787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.792798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.792811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79232 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.792825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.792840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.792852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.792864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79240 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.792878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.792892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.792904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.792916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79248 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.792930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.792945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.792956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.792968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:79256 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.792986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793012] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79264 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793052] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79272 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79280 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79288 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79296 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793281] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79304 len:8 PRP1 0x0 PRP2 0x0 
00:22:48.195 [2024-07-12 15:59:38.793306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79312 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79320 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793433] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79328 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79336 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79344 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79352 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793628] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79360 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793667] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79368 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79376 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79384 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79392 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793901] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79400 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.793951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.793963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.793975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79408 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.793988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.794002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.794013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.794040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79416 len:8 PRP1 0x0 PRP2 0x0 00:22:48.195 [2024-07-12 15:59:38.794054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.195 [2024-07-12 15:59:38.794069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.195 [2024-07-12 15:59:38.794081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.195 [2024-07-12 15:59:38.794092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79424 len:8 PRP1 0x0 PRP2 0x0 00:22:48.196 [2024-07-12 15:59:38.794106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.196 [2024-07-12 15:59:38.794119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.196 [2024-07-12 15:59:38.794130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.196 [2024-07-12 15:59:38.794141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79432 len:8 PRP1 0x0 PRP2 0x0 00:22:48.196 [2024-07-12 15:59:38.794155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.196 [2024-07-12 15:59:38.794168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.196 [2024-07-12 15:59:38.794179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.196 [2024-07-12 15:59:38.794191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79440 len:8 PRP1 0x0 PRP2 0x0 00:22:48.196 [2024-07-12 15:59:38.794204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.196 [2024-07-12 15:59:38.794221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.196 [2024-07-12 15:59:38.794233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.196 [2024-07-12 15:59:38.810008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79448 len:8 PRP1 0x0 PRP2 0x0 00:22:48.196 [2024-07-12 15:59:38.810052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:48.196 [2024-07-12 15:59:38.810069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.196 [2024-07-12 15:59:38.810081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.196 [2024-07-12 15:59:38.810093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79456 len:8 PRP1 0x0 PRP2 0x0 00:22:48.196 [2024-07-12 15:59:38.810106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.196 [2024-07-12 15:59:38.810119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.196 [2024-07-12 15:59:38.810130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.196 [2024-07-12 15:59:38.810142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79464 len:8 PRP1 0x0 PRP2 0x0 00:22:48.196 [2024-07-12 15:59:38.810154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.196 [2024-07-12 15:59:38.810168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.196 [2024-07-12 15:59:38.810179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.196 [2024-07-12 15:59:38.810190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79472 len:8 PRP1 0x0 PRP2 0x0 00:22:48.196 [2024-07-12 15:59:38.810203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.196 [2024-07-12 15:59:38.810216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.196 [2024-07-12 15:59:38.810227] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.196 [2024-07-12 15:59:38.810238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79480 len:8 PRP1 0x0 PRP2 0x0 00:22:48.196 [2024-07-12 15:59:38.810251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.196 [2024-07-12 15:59:38.810264] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.196 [2024-07-12 15:59:38.810276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.196 [2024-07-12 15:59:38.810287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79488 len:8 PRP1 0x0 PRP2 0x0 00:22:48.196 [2024-07-12 15:59:38.810302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.196 [2024-07-12 15:59:38.810316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.196 [2024-07-12 15:59:38.810327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.196 [2024-07-12 15:59:38.810338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79496 len:8 PRP1 0x0 PRP2 0x0 00:22:48.196 [2024-07-12 15:59:38.810352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.196 [2024-07-12 15:59:38.810365] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:48.196 [2024-07-12 15:59:38.810376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:48.196 [2024-07-12 15:59:38.810392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79504 len:8 PRP1 0x0 PRP2 0x0 00:22:48.196 [2024-07-12 15:59:38.810406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.196 [2024-07-12 15:59:38.810470] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20882c0 was disconnected and freed. reset controller. 00:22:48.196 [2024-07-12 15:59:38.810489] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:48.196 [2024-07-12 15:59:38.810527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.196 [2024-07-12 15:59:38.810546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.196 [2024-07-12 15:59:38.810561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.196 [2024-07-12 15:59:38.810575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.196 [2024-07-12 15:59:38.810589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.196 [2024-07-12 15:59:38.810604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.196 [2024-07-12 15:59:38.810618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.196 [2024-07-12 15:59:38.810631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.196 [2024-07-12 15:59:38.810644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:48.196 [2024-07-12 15:59:38.810709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ebd8d0 (9): Bad file descriptor 00:22:48.196 [2024-07-12 15:59:38.814014] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:48.196 [2024-07-12 15:59:38.850819] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
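The reset logged above is the failover path switching from 10.0.0.2:4422 back to 10.0.0.2:4420 after the submission queue was deleted; the test script then checks (grep -c 'Resetting controller successful', count == 3) that one such reset happened per configured path. As a rough, hedged sketch only: the RPC sequence that registers those alternate listeners and paths, reusing the addresses, ports, NQN and workspace rpc.py path that appear later in this same run, would look roughly like:

    # Condensed illustration, not the exact autotest script.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Expose the subsystem on two extra ports so bdev_nvme has somewhere to fail over to.
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # Register all three paths with the bdevperf-side bdev_nvme module.
    for port in 4420 4421 4422; do
        $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done

The individual commands are the ones traced in the log below; only the loop is an editorial condensation.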
00:22:48.196 00:22:48.196 Latency(us) 00:22:48.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.196 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:48.196 Verification LBA range: start 0x0 length 0x4000 00:22:48.196 NVMe0n1 : 15.01 8816.30 34.44 613.27 0.00 13548.30 512.76 32234.00 00:22:48.196 =================================================================================================================== 00:22:48.196 Total : 8816.30 34.44 613.27 0.00 13548.30 512.76 32234.00 00:22:48.196 Received shutdown signal, test time was about 15.000000 seconds 00:22:48.196 00:22:48.196 Latency(us) 00:22:48.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:48.196 =================================================================================================================== 00:22:48.196 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=823153 00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 823153 /var/tmp/bdevperf.sock 00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 823153 ']' 00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:48.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
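The second bdevperf instance launched above runs in RPC-driven mode: with -z it starts idle and listens on /var/tmp/bdevperf.sock until a workload is configured and kicked off over RPC, which is why the script waits for the socket before proceeding. A minimal sketch of driving it by hand, using only the flags, helpers and paths that appear in this run, might look like:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bdevperf.sock

    # Start bdevperf idle (-z) and let it listen on the RPC socket.
    $SPDK/build/examples/bdevperf -z -r $SOCK -q 128 -o 4096 -w verify -t 1 -f &

    # Once the socket is up: attach a target path, then run the configured verify workload.
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

This is an illustrative reconstruction from the traced commands, not a replacement for the waitforlisten/run_test helpers the harness actually uses.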
00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:22:48.196 15:59:44 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:48.196 [2024-07-12 15:59:45.164008] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:48.196 15:59:45 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:48.196 [2024-07-12 15:59:45.408665] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:48.196 15:59:45 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:48.762 NVMe0n1 00:22:48.762 15:59:45 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:49.327 00:22:49.327 15:59:46 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:49.585 00:22:49.585 15:59:46 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:49.585 15:59:46 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:49.842 15:59:47 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:50.099 15:59:47 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:53.378 15:59:50 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:53.378 15:59:50 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:53.378 15:59:50 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=823861 00:22:53.378 15:59:50 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.378 15:59:50 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 823861 00:22:54.748 0 00:22:54.748 15:59:51 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:54.748 [2024-07-12 15:59:44.630748] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:22:54.748 [2024-07-12 15:59:44.630846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823153 ] 00:22:54.748 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.748 [2024-07-12 15:59:44.691208] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.748 [2024-07-12 15:59:44.797957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.748 [2024-07-12 15:59:47.289414] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:54.748 [2024-07-12 15:59:47.289494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.749 [2024-07-12 15:59:47.289516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.749 [2024-07-12 15:59:47.289532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.749 [2024-07-12 15:59:47.289551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.749 [2024-07-12 15:59:47.289565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.749 [2024-07-12 15:59:47.289578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.749 [2024-07-12 15:59:47.289592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.749 [2024-07-12 15:59:47.289606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.749 [2024-07-12 15:59:47.289620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:54.749 [2024-07-12 15:59:47.289674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:54.749 [2024-07-12 15:59:47.289704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f98d0 (9): Bad file descriptor 00:22:54.749 [2024-07-12 15:59:47.300442] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:54.749 Running I/O for 1 seconds... 
00:22:54.749 00:22:54.749 Latency(us) 00:22:54.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.749 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:54.749 Verification LBA range: start 0x0 length 0x4000 00:22:54.749 NVMe0n1 : 1.01 8715.71 34.05 0.00 0.00 14619.77 2961.26 12136.30 00:22:54.749 =================================================================================================================== 00:22:54.749 Total : 8715.71 34.05 0.00 0.00 14619.77 2961.26 12136.30 00:22:54.749 15:59:51 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:54.749 15:59:51 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:54.749 15:59:51 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.006 15:59:52 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:55.006 15:59:52 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:55.263 15:59:52 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.519 15:59:52 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:58.884 15:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:58.884 15:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:58.884 15:59:55 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 823153 00:22:58.885 15:59:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 823153 ']' 00:22:58.885 15:59:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 823153 00:22:58.885 15:59:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:58.885 15:59:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:58.885 15:59:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 823153 00:22:58.885 15:59:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:58.885 15:59:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:58.885 15:59:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 823153' 00:22:58.885 killing process with pid 823153 00:22:58.885 15:59:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 823153 00:22:58.885 15:59:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 823153 00:22:59.142 15:59:56 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:59.142 15:59:56 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:59.399 15:59:56 
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:59.399 rmmod nvme_tcp 00:22:59.399 rmmod nvme_fabrics 00:22:59.399 rmmod nvme_keyring 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 820889 ']' 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 820889 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 820889 ']' 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 820889 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 820889 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 820889' 00:22:59.399 killing process with pid 820889 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 820889 00:22:59.399 15:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 820889 00:22:59.658 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:59.658 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:59.658 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:59.658 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:59.658 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:59.658 15:59:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.658 15:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.658 15:59:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.188 15:59:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:02.188 00:23:02.188 real 0m35.481s 00:23:02.188 user 2m4.864s 00:23:02.188 sys 0m6.229s 00:23:02.188 15:59:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:02.188 15:59:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:02.188 
************************************ 00:23:02.188 END TEST nvmf_failover 00:23:02.188 ************************************ 00:23:02.188 15:59:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:02.188 15:59:58 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:02.188 15:59:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:02.188 15:59:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:02.188 15:59:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:02.188 ************************************ 00:23:02.188 START TEST nvmf_host_discovery 00:23:02.188 ************************************ 00:23:02.188 15:59:58 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:02.188 * Looking for test storage... 00:23:02.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:02.188 15:59:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:02.188 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:02.188 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.188 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.188 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.188 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.188 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.188 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.188 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.188 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:02.189 15:59:59 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:02.189 15:59:59 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.087 16:00:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.087 16:00:00 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:04.087 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:04.087 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:04.087 16:00:01 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:04.087 Found net devices under 0000:84:00.0: cvl_0_0 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:04.087 Found net devices under 0000:84:00.1: cvl_0_1 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.087 16:00:01 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:04.087 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:04.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:04.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:23:04.088 00:23:04.088 --- 10.0.0.2 ping statistics --- 00:23:04.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.088 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:04.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:04.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:23:04.088 00:23:04.088 --- 10.0.0.1 ping statistics --- 00:23:04.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:04.088 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=826630 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 826630 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 826630 ']' 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.088 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.088 [2024-07-12 16:00:01.218944] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:23:04.088 [2024-07-12 16:00:01.219021] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.088 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.088 [2024-07-12 16:00:01.281966] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.345 [2024-07-12 16:00:01.391306] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.345 [2024-07-12 16:00:01.391369] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.345 [2024-07-12 16:00:01.391392] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.345 [2024-07-12 16:00:01.391403] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.345 [2024-07-12 16:00:01.391413] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
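Note: the nvmf_tcp_init plumbing traced above condenses to the commands below. This is a replay of the xtrace output from nvmf/common.sh in this run, not an independent recipe; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the nvmf_tgt arguments are the ones this particular host printed (only the workspace path to nvmf_tgt is shortened here):

    # flush any stale addresses on the two test ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    # move the target-side port into its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side keeps 10.0.0.1, target side gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic to port 4420 and sanity-check reachability both ways
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target application is then started inside the namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2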
00:23:04.345 [2024-07-12 16:00:01.391438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.345 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:04.345 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:04.345 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:04.345 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:04.345 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.345 16:00:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.345 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:04.345 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.346 [2024-07-12 16:00:01.523231] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.346 [2024-07-12 16:00:01.531362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.346 null0 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.346 null1 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=826677 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 826677 /tmp/host.sock 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 826677 ']' 00:23:04.346 16:00:01 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:04.346 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.346 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.346 [2024-07-12 16:00:01.605555] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:23:04.346 [2024-07-12 16:00:01.605645] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid826677 ] 00:23:04.346 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.603 [2024-07-12 16:00:01.666615] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.603 [2024-07-12 16:00:01.777231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.603 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:04.603 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:04.603 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:04.603 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:04.603 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.603 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.603 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.603 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:04.603 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.603 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
sort 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:04.859 16:00:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:04.859 16:00:02 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:04.859 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.114 [2024-07-12 16:00:02.169139] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:05.114 16:00:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:05.678 [2024-07-12 16:00:02.950387] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:05.678 [2024-07-12 16:00:02.950417] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:05.678 [2024-07-12 16:00:02.950439] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:05.935 [2024-07-12 16:00:03.079912] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:05.935 [2024-07-12 16:00:03.142621] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:23:05.935 [2024-07-12 16:00:03.142645] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:06.192 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:06.192 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:06.192 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:06.192 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:06.193 16:00:03 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:06.193 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:06.451 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.709 [2024-07-12 16:00:03.806049] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:06.709 [2024-07-12 16:00:03.806687] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:06.709 [2024-07-12 16:00:03.806731] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.709 [2024-07-12 16:00:03.933527] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:06.709 16:00:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:06.967 [2024-07-12 16:00:04.238182] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:06.967 [2024-07-12 16:00:04.238210] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:06.967 [2024-07-12 16:00:04.238220] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.923 16:00:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.923 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:07.923 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:07.923 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:07.923 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:07.923 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:07.923 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.923 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.923 [2024-07-12 16:00:05.030615] bdev_nvme.c:6970:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:07.923 [2024-07-12 16:00:05.030659] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:07.923 [2024-07-12 16:00:05.032716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.923 [2024-07-12 16:00:05.032788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.923 [2024-07-12 16:00:05.032809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.923 [2024-07-12 16:00:05.032823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.923 [2024-07-12 16:00:05.032837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.923 [2024-07-12 16:00:05.032851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.923 [2024-07-12 16:00:05.032865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.924 [2024-07-12 16:00:05.032887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.924 [2024-07-12 16:00:05.032901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab73d0 is same with the state(5) to be set 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:07.924 16:00:05 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:07.924 [2024-07-12 16:00:05.042715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab73d0 (9): Bad file descriptor 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.924 [2024-07-12 16:00:05.052764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:07.924 [2024-07-12 16:00:05.053007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.924 [2024-07-12 16:00:05.053039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab73d0 with addr=10.0.0.2, port=4420 00:23:07.924 [2024-07-12 16:00:05.053070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab73d0 is same with the state(5) to be set 00:23:07.924 [2024-07-12 16:00:05.053092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab73d0 (9): Bad file descriptor 00:23:07.924 [2024-07-12 16:00:05.053113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:07.924 [2024-07-12 16:00:05.053127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:07.924 [2024-07-12 16:00:05.053142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:07.924 [2024-07-12 16:00:05.053161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
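The waitforcondition calls traced above all use the same bounded-retry helper from common/autotest_common.sh; the xtrace lines show local cond, local max=10, (( max-- )), an eval of the condition string, sleep 1, and return 0 on success. A minimal sketch reconstructed from that trace (the failure branch is an assumption, since the log only ever shows the success path):

waitforcondition() {
    # cond is a bash expression, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
    local cond=$1
    local max=10
    while (( max-- )); do
        # re-evaluate the condition on every pass and stop as soon as it holds
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}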
00:23:07.924 [2024-07-12 16:00:05.062848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:07.924 [2024-07-12 16:00:05.063010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.924 [2024-07-12 16:00:05.063053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab73d0 with addr=10.0.0.2, port=4420 00:23:07.924 [2024-07-12 16:00:05.063070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab73d0 is same with the state(5) to be set 00:23:07.924 [2024-07-12 16:00:05.063106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab73d0 (9): Bad file descriptor 00:23:07.924 [2024-07-12 16:00:05.063139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:07.924 [2024-07-12 16:00:05.063157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:07.924 [2024-07-12 16:00:05.063170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:07.924 [2024-07-12 16:00:05.063188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:07.924 [2024-07-12 16:00:05.072921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:07.924 [2024-07-12 16:00:05.073120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.924 [2024-07-12 16:00:05.073147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab73d0 with addr=10.0.0.2, port=4420 00:23:07.924 [2024-07-12 16:00:05.073162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab73d0 is same with the state(5) to be set 00:23:07.924 [2024-07-12 16:00:05.073183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab73d0 (9): Bad file descriptor 00:23:07.924 [2024-07-12 16:00:05.073203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:07.924 [2024-07-12 16:00:05.073217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:07.924 [2024-07-12 16:00:05.073230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:07.924 [2024-07-12 16:00:05.073249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
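The repeated posix_sock_create: connect() failed, errno = 111 messages are expected at this point: the 4420 listener was removed by the nvmf_subsystem_remove_listener call above, so each reconnect attempt to 10.0.0.2:4420 is refused until the discovery poller drops that path (the "4420 not found" line further down). On Linux, errno 111 is ECONNREFUSED, which can be confirmed from a shell:

python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused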
00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:07.924 [2024-07-12 16:00:05.082993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:07.924 [2024-07-12 16:00:05.083220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.924 [2024-07-12 16:00:05.083248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab73d0 with addr=10.0.0.2, port=4420 00:23:07.924 [2024-07-12 16:00:05.083264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab73d0 is same with the state(5) to be set 00:23:07.924 [2024-07-12 16:00:05.083285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab73d0 (9): Bad file descriptor 00:23:07.924 [2024-07-12 16:00:05.083318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:07.924 [2024-07-12 16:00:05.083340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:07.924 [2024-07-12 16:00:05.083366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:07.924 [2024-07-12 16:00:05.083385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:07.924 [2024-07-12 16:00:05.093085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:07.924 [2024-07-12 16:00:05.093254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.924 [2024-07-12 16:00:05.093280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab73d0 with addr=10.0.0.2, port=4420 00:23:07.924 [2024-07-12 16:00:05.093296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab73d0 is same with the state(5) to be set 00:23:07.924 [2024-07-12 16:00:05.093317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab73d0 (9): Bad file descriptor 00:23:07.924 [2024-07-12 16:00:05.093348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:07.924 [2024-07-12 16:00:05.093366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:07.924 [2024-07-12 16:00:05.093380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:07.924 [2024-07-12 16:00:05.093399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:07.924 [2024-07-12 16:00:05.103159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:07.924 [2024-07-12 16:00:05.103286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.924 [2024-07-12 16:00:05.103312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab73d0 with addr=10.0.0.2, port=4420 00:23:07.924 [2024-07-12 16:00:05.103327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab73d0 is same with the state(5) to be set 00:23:07.924 [2024-07-12 16:00:05.103348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab73d0 (9): Bad file descriptor 00:23:07.924 [2024-07-12 16:00:05.103381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:07.924 [2024-07-12 16:00:05.103399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:07.924 [2024-07-12 16:00:05.103413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:07.924 [2024-07-12 16:00:05.103443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
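The condition checks above keep re-running three small rpc_cmd/jq pipelines against the host-side RPC socket. Reconstructed from the pipelines visible in the xtrace output (the actual function bodies in host/discovery.sh may differ in minor details), they amount to:

get_subsystem_names() {
    # controller names as seen by the host bdev_nvme layer, sorted and space-joined
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # namespaces exposed as bdevs, e.g. "nvme0n1 nvme0n2"
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # trsvcid (port) of every path to the named controller, e.g. "4420 4421"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}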
00:23:07.924 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.924 [2024-07-12 16:00:05.113228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:07.924 [2024-07-12 16:00:05.113398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.924 [2024-07-12 16:00:05.113424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab73d0 with addr=10.0.0.2, port=4420 00:23:07.924 [2024-07-12 16:00:05.113440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xab73d0 is same with the state(5) to be set 00:23:07.924 [2024-07-12 16:00:05.113461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xab73d0 (9): Bad file descriptor 00:23:07.924 [2024-07-12 16:00:05.113498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:07.924 [2024-07-12 16:00:05.113515] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:07.924 [2024-07-12 16:00:05.113528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:07.924 [2024-07-12 16:00:05.113547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:07.924 [2024-07-12 16:00:05.119506] bdev_nvme.c:6775:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:07.924 [2024-07-12 16:00:05.119533] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:07.925 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # jq -r '.[].name' 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:08.183 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:08.184 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq 
'. | length' 00:23:08.184 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.184 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:08.184 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.184 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:08.184 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:08.184 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:08.184 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:08.184 16:00:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:08.184 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.184 16:00:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.555 [2024-07-12 16:00:06.419927] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:09.555 [2024-07-12 16:00:06.419958] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:09.555 [2024-07-12 16:00:06.419981] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:09.555 [2024-07-12 16:00:06.506266] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:09.555 [2024-07-12 16:00:06.574354] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:09.555 [2024-07-12 16:00:06.574398] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:23:09.555 request: 00:23:09.555 { 00:23:09.555 "name": "nvme", 00:23:09.555 "trtype": "tcp", 00:23:09.555 "traddr": "10.0.0.2", 00:23:09.555 "adrfam": "ipv4", 00:23:09.555 "trsvcid": "8009", 00:23:09.555 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:09.555 "wait_for_attach": true, 00:23:09.555 "method": "bdev_nvme_start_discovery", 00:23:09.555 "req_id": 1 00:23:09.555 } 00:23:09.555 Got JSON-RPC error response 00:23:09.555 response: 00:23:09.555 { 00:23:09.555 "code": -17, 00:23:09.555 "message": "File exists" 00:23:09.555 } 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.555 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.555 request: 00:23:09.555 { 00:23:09.555 "name": "nvme_second", 00:23:09.555 "trtype": "tcp", 00:23:09.555 "traddr": "10.0.0.2", 00:23:09.555 "adrfam": "ipv4", 00:23:09.555 "trsvcid": "8009", 00:23:09.556 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:09.556 "wait_for_attach": true, 00:23:09.556 "method": "bdev_nvme_start_discovery", 00:23:09.556 "req_id": 1 00:23:09.556 } 00:23:09.556 Got JSON-RPC error response 00:23:09.556 response: 00:23:09.556 { 00:23:09.556 "code": -17, 00:23:09.556 "message": "File exists" 00:23:09.556 } 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.556 16:00:06 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.556 16:00:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:10.487 [2024-07-12 16:00:07.766647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.487 [2024-07-12 16:00:07.766705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab5120 with addr=10.0.0.2, port=8010 00:23:10.487 [2024-07-12 16:00:07.766756] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:10.487 [2024-07-12 16:00:07.766781] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:10.487 [2024-07-12 16:00:07.766809] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:11.872 [2024-07-12 16:00:08.768999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:11.872 [2024-07-12 16:00:08.769057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xab5120 with addr=10.0.0.2, port=8010 00:23:11.872 [2024-07-12 16:00:08.769078] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:11.872 [2024-07-12 16:00:08.769115] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:11.872 [2024-07-12 16:00:08.769135] bdev_nvme.c:7050:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:12.818 [2024-07-12 16:00:09.771289] bdev_nvme.c:7031:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:12.818 request: 00:23:12.818 { 00:23:12.819 "name": "nvme_second", 00:23:12.819 "trtype": "tcp", 00:23:12.819 "traddr": "10.0.0.2", 00:23:12.819 "adrfam": "ipv4", 00:23:12.819 "trsvcid": "8010", 00:23:12.819 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:12.819 "wait_for_attach": false, 00:23:12.819 "attach_timeout_ms": 3000, 00:23:12.819 "method": "bdev_nvme_start_discovery", 00:23:12.819 "req_id": 1 00:23:12.819 } 00:23:12.819 Got JSON-RPC error response 00:23:12.819 response: 00:23:12.819 { 00:23:12.819 "code": -110, 
00:23:12.819 "message": "Connection timed out" 00:23:12.819 } 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 826677 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:12.819 rmmod nvme_tcp 00:23:12.819 rmmod nvme_fabrics 00:23:12.819 rmmod nvme_keyring 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 826630 ']' 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 826630 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 826630 ']' 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 826630 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 826630 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:12.819 
16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 826630' 00:23:12.819 killing process with pid 826630 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 826630 00:23:12.819 16:00:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 826630 00:23:13.077 16:00:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:13.077 16:00:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:13.078 16:00:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:13.078 16:00:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:13.078 16:00:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:13.078 16:00:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.078 16:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:13.078 16:00:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.021 16:00:12 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:15.022 00:23:15.022 real 0m13.252s 00:23:15.022 user 0m19.168s 00:23:15.022 sys 0m2.821s 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:15.022 ************************************ 00:23:15.022 END TEST nvmf_host_discovery 00:23:15.022 ************************************ 00:23:15.022 16:00:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:15.022 16:00:12 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:15.022 16:00:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:15.022 16:00:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:15.022 16:00:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:15.022 ************************************ 00:23:15.022 START TEST nvmf_host_multipath_status 00:23:15.022 ************************************ 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:15.022 * Looking for test storage... 
00:23:15.022 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.022 16:00:12 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:23:15.022 16:00:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:17.585 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:17.586 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:17.586 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
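For reference: the discovery traced above matches the two e810 functions by their 0x8086:0x159b vendor/device IDs, and the trace that continues below resolves each function to its kernel net device through sysfs. A minimal standalone sketch of that resolution step, using the PCI addresses and cvl_* names reported in this run (the loop is illustrative, not the common.sh implementation itself):

for pci in 0000:84:00.0 0000:84:00.1; do                  # e810 functions found above
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$netdir" ] || continue                      # function with no bound kernel netdev
        echo "Found net devices under $pci: ${netdir##*/}"   # cvl_0_0 / cvl_0_1 in this run
    done
done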
00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:17.586 Found net devices under 0000:84:00.0: cvl_0_0 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:17.586 Found net devices under 0000:84:00.1: cvl_0_1 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.586 16:00:14 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:23:17.586 00:23:17.586 --- 10.0.0.2 ping statistics --- 00:23:17.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.586 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:17.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:23:17.586 00:23:17.586 --- 10.0.0.1 ping statistics --- 00:23:17.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.586 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=830297 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 830297 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 830297 ']' 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:17.586 [2024-07-12 16:00:14.497854] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
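For reference, the network plumbing that nvmf_tcp_init traced just above (target interface moved into a namespace, addresses on both ends, a firewall accept rule for the NVMe/TCP port, and the two connectivity pings) reduces to the following standalone sketch; the interface names and addresses are the ones used in this run, and the sequence is a reconstruction from the trace rather than the library function itself:

ip netns add cvl_0_0_ns_spdk                                   # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps cvl_0_1 in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP listener port
ping -c 1 10.0.0.2                                             # root namespace -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back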
00:23:17.586 [2024-07-12 16:00:14.497946] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.586 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.586 [2024-07-12 16:00:14.561165] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:17.586 [2024-07-12 16:00:14.661635] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.586 [2024-07-12 16:00:14.661690] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.586 [2024-07-12 16:00:14.661713] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.586 [2024-07-12 16:00:14.661775] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.586 [2024-07-12 16:00:14.661802] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.586 [2024-07-12 16:00:14.661863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.586 [2024-07-12 16:00:14.661868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=830297 00:23:17.586 16:00:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:17.847 [2024-07-12 16:00:15.065222] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.847 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:18.103 Malloc0 00:23:18.359 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:18.616 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:18.873 16:00:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.874 [2024-07-12 16:00:16.150838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.131 16:00:16 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:19.131 [2024-07-12 16:00:16.387423] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:19.131 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=830536 00:23:19.131 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.131 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 830536 /var/tmp/bdevperf.sock 00:23:19.131 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:19.131 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 830536 ']' 00:23:19.131 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.131 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.131 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.131 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.131 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:19.696 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:19.696 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:23:19.696 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:19.696 16:00:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:20.259 Nvme0n1 00:23:20.260 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:20.516 Nvme0n1 00:23:20.516 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:20.516 16:00:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:23.054 16:00:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:23.054 16:00:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:23.055 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:23.055 16:00:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:24.493 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:24.493 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:24.493 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.493 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:24.493 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.493 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:24.493 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.493 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:24.751 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:24.751 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:24.751 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.751 16:00:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:25.008 16:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.008 16:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:25.008 16:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.008 16:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:25.265 16:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.266 16:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:25.266 16:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.266 16:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:25.523 16:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.523 16:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:25.523 16:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.523 16:00:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:25.780 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.781 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:25.781 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:26.038 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:26.602 16:00:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:27.532 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:27.532 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:27.532 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.532 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:27.790 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:27.790 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:27.790 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.790 16:00:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:28.048 16:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.048 16:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:28.048 16:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.048 16:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:28.306 16:00:25 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.306 16:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:28.306 16:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.306 16:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:28.564 16:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.564 16:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:28.564 16:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.564 16:00:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:28.832 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.832 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:28.832 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.832 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:29.090 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.090 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:29.090 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:29.347 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:29.604 16:00:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:30.976 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:30.976 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:30.976 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.976 16:00:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:30.976 16:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.976 16:00:28 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:30.976 16:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.976 16:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:31.233 16:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:31.233 16:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:31.234 16:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.234 16:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:31.490 16:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.490 16:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:31.490 16:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.490 16:00:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:31.748 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.748 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:31.748 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.748 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:32.324 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.324 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:32.324 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.324 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:32.324 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.324 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:32.324 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:32.586 16:00:29 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:33.150 16:00:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:34.082 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:34.082 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:34.082 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.083 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:34.340 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.340 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:34.340 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.340 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:34.598 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:34.598 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:34.598 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.598 16:00:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:34.856 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.856 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:34.856 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.856 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:35.114 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.114 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:35.114 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:35.114 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.372 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:23:35.372 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:35.372 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.372 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:35.629 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:35.629 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:35.629 16:00:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:36.192 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:36.193 16:00:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:37.562 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:37.562 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:37.562 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.562 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:37.562 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:37.562 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:37.562 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.562 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:37.820 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:37.820 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:37.820 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.820 16:00:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:38.076 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.076 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:23:38.076 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.076 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:38.334 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.334 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:38.334 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.334 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:38.591 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:38.591 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:38.591 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.591 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:38.848 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:38.848 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:38.848 16:00:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:39.105 16:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:39.363 16:00:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:40.295 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:40.295 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:40.295 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.295 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:40.553 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:40.553 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:40.553 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.553 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:40.811 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:40.811 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:40.811 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:40.811 16:00:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:41.070 16:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.070 16:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:41.070 16:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.070 16:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:41.328 16:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.328 16:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:41.328 16:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.328 16:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:41.586 16:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:41.586 16:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:41.586 16:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.586 16:00:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:41.843 16:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.843 16:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:42.101 16:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:42.102 16:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:23:42.667 16:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:42.667 16:00:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:44.040 16:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:44.040 16:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:44.040 16:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.040 16:00:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:44.041 16:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.041 16:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:44.041 16:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.041 16:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:44.298 16:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.298 16:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:44.298 16:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.298 16:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:44.556 16:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.556 16:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:44.556 16:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.556 16:00:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:45.129 16:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.129 16:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:45.129 16:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.129 16:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:45.444 16:00:42 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.444 16:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:45.444 16:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.444 16:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:45.703 16:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.703 16:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:45.703 16:00:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:45.962 16:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:46.219 16:00:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:47.152 16:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:47.152 16:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:47.152 16:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.152 16:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:47.410 16:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:47.410 16:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:47.410 16:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.410 16:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:47.669 16:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.669 16:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:47.669 16:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.669 16:00:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:47.926 16:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:47.926 16:00:45 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:47.926 16:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:47.926 16:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:48.184 16:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.184 16:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:48.184 16:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.184 16:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:48.441 16:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:48.441 16:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:48.441 16:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:48.441 16:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:49.005 16:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:49.005 16:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:49.005 16:00:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:49.005 16:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:49.568 16:00:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:50.499 16:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:50.499 16:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:50.499 16:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.499 16:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:50.756 16:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:50.756 16:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:50.756 16:00:47 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:50.756 16:00:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:51.012 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.012 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:51.012 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.012 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:51.269 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.269 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:51.269 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.269 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:51.526 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.526 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:51.526 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.526 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:51.784 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:51.784 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:51.784 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:51.784 16:00:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:52.042 16:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:52.042 16:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:52.042 16:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:52.300 16:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:52.865 16:00:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:53.798 16:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:53.798 16:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:53.798 16:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:53.798 16:00:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:54.056 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.056 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:54.056 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.056 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:54.314 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:54.314 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:54.314 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.314 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:54.571 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.572 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:54.572 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.572 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:54.829 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:54.829 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:54.829 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:54.829 16:00:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:55.087 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:55.087 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:55.087 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:55.087 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:55.345 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:55.345 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 830536 00:23:55.345 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 830536 ']' 00:23:55.345 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 830536 00:23:55.345 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:55.345 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.345 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 830536 00:23:55.345 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:55.345 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:55.345 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 830536' 00:23:55.345 killing process with pid 830536 00:23:55.345 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 830536 00:23:55.345 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 830536 00:23:55.615 Connection closed with partial response: 00:23:55.615 00:23:55.615 00:23:55.615 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 830536 00:23:55.615 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:55.615 [2024-07-12 16:00:16.449296] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:23:55.615 [2024-07-12 16:00:16.449388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid830536 ] 00:23:55.615 EAL: No free 2048 kB hugepages reported on node 1 00:23:55.615 [2024-07-12 16:00:16.507773] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.615 [2024-07-12 16:00:16.614819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.615 Running I/O for 90 seconds... 
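For reference, the check loop driven above by host/multipath_status.sh reduces to three small helpers: set_ANA_state issues nvmf_subsystem_listener_set_ana_state for the 4420 and 4421 listeners, port_status queries bdev_nvme_get_io_paths over the bdevperf RPC socket and picks a single field out with jq, and check_status chains six port_status calls. A minimal sketch of that pattern, reconstructed from the commands logged above (NQN, address, ports and socket path as they appear in the log; this is not the verbatim upstream script):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  set_ANA_state() {   # $1 = ANA state for listener 4420, $2 = ANA state for listener 4421
      $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  port_status() {     # $1 = trsvcid, $2 = field (current|connected|accessible), $3 = expected value
      local got
      got=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ "$got" == "$3" ]]
  }

  check_status() {    # expected current/connected/accessible for ports 4420 and 4421, in that order
      port_status 4420 current "$1"    && port_status 4421 current "$2" &&
      port_status 4420 connected "$3"  && port_status 4421 connected "$4" &&
      port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }

In the log each set_ANA_state is followed by sleep 1 before check_status runs, presumably to give the host side a moment to observe the ANA change before the path flags are asserted.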
00:23:55.615 [2024-07-12 16:00:33.166580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.615 [2024-07-12 16:00:33.166649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.168073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.615 [2024-07-12 16:00:33.168112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.168141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.168175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.168202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.168219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.168244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.168261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.168286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.168303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.168328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.168344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.168370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.168386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.168412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.168429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.168956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.168982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.615 [2024-07-12 16:00:33.169761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:55.615 [2024-07-12 16:00:33.169789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.169806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.169833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.169849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.169875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:55.616 [2024-07-12 16:00:33.169892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.169919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.169936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.169962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.169979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 
lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.616 [2024-07-12 16:00:33.170521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170904] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.170966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.170995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 
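The port_status checks earlier key off four fields of the bdev_nvme_get_io_paths reply: transport.trsvcid plus each path's current, connected and accessible flags. As a rough illustration of the shape those jq filters select from (the field names are the ones the filters themselves reference; the concrete values and any surrounding structure are assumptions, and the real reply carries more per-path detail), the same filter can be replayed against a hand-written sample:

  # illustrative sample only; field names taken from the jq filters used by port_status above
  echo '{ "poll_groups": [ { "io_paths": [
    { "transport": { "traddr": "10.0.0.2", "trsvcid": "4420" },
      "current": true,  "connected": true, "accessible": true },
    { "transport": { "traddr": "10.0.0.2", "trsvcid": "4421" },
      "current": false, "connected": true, "accessible": false } ] } ] }' |
  jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").accessible'
  # prints: false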
00:23:55.616 [2024-07-12 16:00:33.171372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:55.616 [2024-07-12 16:00:33.171906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.616 [2024-07-12 16:00:33.171923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.171953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.171970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.172003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.172035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.172066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.172083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.172111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.172127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.172155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:73096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.172172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.172201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:73104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.172217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.172246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.172262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.172290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.172307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.172335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.172352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.172381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.172398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.172517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.172537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.172571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.172589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.172620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.172637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.172674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.172691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:33.172748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:73176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:33.172768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.837565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:49.837640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.837693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:49.837711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.837733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:55.617 [2024-07-12 16:00:49.837774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.837806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:49.837823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.837845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:49.837862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.837884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:49.837900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.837922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:49.837938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.837960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:49.837976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.837998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:49.838014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:49.838066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:49.838116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:49.838154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:77232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:49.838191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.617 [2024-07-12 16:00:49.838228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.617 [2024-07-12 16:00:49.838265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.617 [2024-07-12 16:00:49.838301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.617 [2024-07-12 16:00:49.838337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.617 [2024-07-12 16:00:49.838373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.617 [2024-07-12 16:00:49.838418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.617 [2024-07-12 16:00:49.838454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.617 [2024-07-12 16:00:49.838491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.617 [2024-07-12 16:00:49.838527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.617 [2024-07-12 16:00:49.838567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.617 [2024-07-12 16:00:49.838607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.617 [2024-07-12 16:00:49.838643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.617 [2024-07-12 16:00:49.838679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.617 [2024-07-12 16:00:49.838715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:55.617 [2024-07-12 16:00:49.838736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.617 [2024-07-12 16:00:49.838774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.838798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.838815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.838837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.838853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.838875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.618 [2024-07-12 16:00:49.838890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.838913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.838929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 
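Every completion notice in this dump reports the same status pair, (03/02): status code type 3h is Path Related Status and, within that type, status code 02h is Asymmetric Access Inaccessible, which is what the test provokes by flipping the listener ANA states while bdevperf keeps I/O outstanding on qid 1. To summarize a saved copy of the dump instead of reading it raw, a couple of grep tallies are enough (a small sketch; the path is the one cat'ed above, and the patterns are keyed to the print_command/print_completion notices as they appear here):

  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  # tally completions by status string and (sct/sc) pair
  grep -oE '\*NOTICE\*: [A-Z ]+ \([0-9a-f]{2}/[0-9a-f]{2}\)' "$log" | sort | uniq -c | sort -rn
  # tally submissions by opcode (READ vs WRITE) on the I/O qpair
  grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' "$log" | sort | uniq -c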
00:23:55.618 [2024-07-12 16:00:49.838950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.838966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.838988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.839969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.839992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.840009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.840049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.618 [2024-07-12 16:00:49.840104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.840142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.840181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.840219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.840258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.840296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.840333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.840372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.618 [2024-07-12 16:00:49.840409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.840451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.840499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:55.618 [2024-07-12 16:00:49.840540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.618 [2024-07-12 16:00:49.840577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.840599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.618 [2024-07-12 16:00:49.840614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.841423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.618 [2024-07-12 16:00:49.841447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.841473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.618 [2024-07-12 16:00:49.841490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.841512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.618 [2024-07-12 16:00:49.841527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.841549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.618 [2024-07-12 16:00:49.841565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.841587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.618 [2024-07-12 16:00:49.841602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.841623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.618 [2024-07-12 16:00:49.841639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:55.618 [2024-07-12 16:00:49.841660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.618 [2024-07-12 16:00:49.841676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.841697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 
lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.841717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.841762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.841781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.841805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.841830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.841852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.841868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.841890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.841907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.841929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.841945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.841967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.841983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.842006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.842022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.842060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.842076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.842099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.842114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.842544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.842566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.842593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.842610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.842632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.842653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.842676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.842693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.842714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.842754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.842779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.842796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.842818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.842834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.842857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.842873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.842895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.842912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.842936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.842953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 
m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.842975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.842993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.843015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.843046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.843069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.843084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.843106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.843122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.843143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.619 [2024-07-12 16:00:49.843159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.843185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.843202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.843224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.843239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.843261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.619 [2024-07-12 16:00:49.843276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:55.619 [2024-07-12 16:00:49.843298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.843314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.843336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.843351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.843373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.843388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.845098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.845142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.845180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845406] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.620 [2024-07-12 16:00:49.845733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.845780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:55.620 [2024-07-12 16:00:49.845802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.848089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.848157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.848196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.848234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.848272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.848310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.848349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.848387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.848424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.848462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.848500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.848554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.848593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.620 [2024-07-12 16:00:49.848631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:55.620 [2024-07-12 16:00:49.848667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.848684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.848706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.848722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.848765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.848784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.848808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.848824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.848847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.848862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.848885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.848901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.848923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.848939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.848961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.848976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.849002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.849018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.849040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.849078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.849105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.849121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.849142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.849157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.849179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.849194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.849230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.849247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.849270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.849285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 
00:23:55.621 [2024-07-12 16:00:49.849308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.849324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.849909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.849934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.849962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.849980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.850002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.850019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.850041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.850057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.850079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.850095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.850117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.850133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.850161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.850177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.850200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.850215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.850237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.850253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.850275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.850291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.850312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.850343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.850365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.850380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.850402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.850417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.851946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.851973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.852002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.852019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.852041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.852058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.852080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.852096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.852118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.852133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.852155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.852192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.852216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.852231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.852253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.852268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.852289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.621 [2024-07-12 16:00:49.852305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.852326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.852341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.852361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.852377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.852398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.852413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.852434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.621 [2024-07-12 16:00:49.852449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:55.621 [2024-07-12 16:00:49.852471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.622 [2024-07-12 16:00:49.852486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.852507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.622 [2024-07-12 16:00:49.852522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.852543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:55.622 [2024-07-12 16:00:49.852559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.852580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.622 [2024-07-12 16:00:49.852595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.852616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.852636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.852658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.852674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.852710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.852727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.852759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.852777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.852800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.852816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.852839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.852854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.856447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.856488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.856516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.856533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.856571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 
lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.856587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.856609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.856624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.856646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.856661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.856683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.856699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.856735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.856760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.856791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.856809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.856831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.856847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.856868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.856885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.856906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.856922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.856944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.856960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.856982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.856998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.857052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.857089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.857127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.857163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.622 [2024-07-12 16:00:49.857200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.622 [2024-07-12 16:00:49.857238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.622 [2024-07-12 16:00:49.857281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.622 [2024-07-12 16:00:49.857317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.857355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:23:55.622 [2024-07-12 16:00:49.857376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.857391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.857428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.622 [2024-07-12 16:00:49.857478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.857514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.857550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.857584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.857619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.622 [2024-07-12 16:00:49.857654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.622 [2024-07-12 16:00:49.857689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.622 [2024-07-12 16:00:49.857752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:55.622 [2024-07-12 16:00:49.857778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.857794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.857819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.857835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.857857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.857872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.857894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.857910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.857932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.857947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.857970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.857986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.858042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.858082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.858134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.858170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.858205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.858245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.858281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.623 [2024-07-12 16:00:49.858316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.623 [2024-07-12 16:00:49.858352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.623 [2024-07-12 16:00:49.858389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.623 [2024-07-12 16:00:49.858425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.623 [2024-07-12 16:00:49.858461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.623 [2024-07-12 16:00:49.858496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:55.623 [2024-07-12 16:00:49.858532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.858569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.858605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.858641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.858662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.858681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.861000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.861041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.861069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.861086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.861107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.861122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.861143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.623 [2024-07-12 16:00:49.861158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.861179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.623 [2024-07-12 16:00:49.861194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.861215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 
nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.623 [2024-07-12 16:00:49.861230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.861250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.623 [2024-07-12 16:00:49.861266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.861287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.623 [2024-07-12 16:00:49.861302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.861322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.861338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.861358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.861373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.861394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.623 [2024-07-12 16:00:49.861409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:55.623 [2024-07-12 16:00:49.861429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.861444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.861470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.861486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.861506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.861521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.861541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.861556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.861577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.861592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.861612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.861627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.861648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.861662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.861683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.861698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.861733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.861758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.861790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.861806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.861828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.861844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.861866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.861882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.861904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.861919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.861946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.861963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 
00:23:55.624 [2024-07-12 16:00:49.861986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.862002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.862042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.862058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.862079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.862109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.862131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.862146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.862167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.862181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.862715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.862760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.862789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.862806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.862829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.862845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.862867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.862884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.862906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.862922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.862944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.862959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.862982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.863003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.863061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.863111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.863148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.863183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.863219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.863255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.863290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.863326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.863361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.863396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.624 [2024-07-12 16:00:49.863431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.863470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.863507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:78264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.863542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.863578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.624 [2024-07-12 16:00:49.863613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:55.624 [2024-07-12 16:00:49.863634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.863649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.863670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:55.625 [2024-07-12 16:00:49.863686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.864318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.864361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.864397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.864433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.864469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.864504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.864546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.864581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.864617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 
lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.864653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.864688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.864748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.864789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.864827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.864865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.864902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.864939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.864961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.864977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.865003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.865020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.865058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.865073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.865110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.865125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.865146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.865161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.865182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.865198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.865606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.865630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.865656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.865672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.865693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.865708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.865751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.865770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.865792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.865808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.865831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.865846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:23:55.625 [2024-07-12 16:00:49.865868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.865884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.865906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.865927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.865950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.865966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.865988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.866004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.866043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.866059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.866081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.866111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.866133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.866148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.866169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.625 [2024-07-12 16:00:49.866184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.867824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.867849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.867877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.867894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:28 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:55.625 [2024-07-12 16:00:49.867917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.625 [2024-07-12 16:00:49.867933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.867955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.867971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.867993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.868008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.868074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.868128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.868163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.868198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.868234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.868269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.868305] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.868340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.868375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.868411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.868446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.868482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.868517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.868558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.868593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.868628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:55.626 [2024-07-12 16:00:49.868663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.868700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.868763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.868788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.868804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.870495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.870518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.870559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.870576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.870597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.870612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.870633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.870648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.870668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.870684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.870709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.870748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.870773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:78480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.870790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.870812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.870828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.870849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.870865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.870887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.870903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.870925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.870941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.870963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.870979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.871000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.871017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.871054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.871069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.871089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.871104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.871125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.871140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.871160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.626 [2024-07-12 16:00:49.871175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.871196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.871217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.871239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.871254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:55.626 [2024-07-12 16:00:49.871275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.626 [2024-07-12 16:00:49.871290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.871310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.871325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.871346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.871360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.871381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.871396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.872210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.872253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.872289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 
00:23:55.627 [2024-07-12 16:00:49.872310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.872325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:78024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.872361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.872396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.872436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.872473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:78336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.872509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.872544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:78400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.872580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.872615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.872650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.872685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.872744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.872786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.872824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.872861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.872900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.872943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.872965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.872981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.873004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.873020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.874363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.874386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.874428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.874445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.874465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.874480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.874500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.874515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.874536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:78224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.627 [2024-07-12 16:00:49.874550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.874571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.874585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.874607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.874622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.874643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.874658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.874678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.874693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.874733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.627 [2024-07-12 16:00:49.874763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.874797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:55.627 [2024-07-12 16:00:49.874813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:55.627 [2024-07-12 16:00:49.874835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.874851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.874873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.874889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.874911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.874926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.874948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.874964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.874986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.875001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.875053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.875105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:78768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.875141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.875176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:78800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.875211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.875250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.875286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.875321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.875356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.875391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:78168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.875427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.875462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.875498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.875533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:78736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.875569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.875590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.875605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.878703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:78808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.878750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.878804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.878828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.878852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:78840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.878869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.878891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.878906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.878929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.878945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.878967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.878983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.879005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:78904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.879021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.879059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.879074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:23:55.628 [2024-07-12 16:00:49.879110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.879126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.879147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.879162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.879182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.879197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.879217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.879232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.879252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.879267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.879288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.879302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.879327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:78616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.879343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.879364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.879379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.879399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.879414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.879435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.628 [2024-07-12 16:00:49.879450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.879470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:78432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.879485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.879505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.628 [2024-07-12 16:00:49.879520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:55.628 [2024-07-12 16:00:49.879541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.879556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.879577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.879592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.879612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.879627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.879648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.879662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.879684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:78752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.879698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.879734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.879758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.879787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.879804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.879826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.879842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.879864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.879879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.879901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.879917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.879939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:78704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.879954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.879976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.879992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.880014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.880044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.880065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.880080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.880100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.880115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.880136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.880151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.880171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.880186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.880206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:55.629 [2024-07-12 16:00:49.880221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.880247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.880262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.880283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.880297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.880318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.880334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.880355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.880370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.880390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.880405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.880425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.880440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.880461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.880476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.880497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.880512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.881247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:78448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.881269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.881295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 
lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.881311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.881332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.881347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.881369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.881384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.881404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.881423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.881445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.881461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.881481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.881496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.881516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.881531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.881552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.881567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.881588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.881603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.881625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.881640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.882116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:78080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.882139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.882166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.629 [2024-07-12 16:00:49.882183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.882204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.882220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.882257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.882272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.882293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.629 [2024-07-12 16:00:49.882309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:55.629 [2024-07-12 16:00:49.882329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.882349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.882371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.882386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.882407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.882422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.882443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.882457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.882478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.882493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:23:55.630 [2024-07-12 16:00:49.882514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.882528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.882549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.882564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.882585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:78856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.882600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.883752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:78888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.883777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.883820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.883838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.883861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:78456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.883877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.883899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.883915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.883937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.883953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.883981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.883997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.884051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.884102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.884139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.884175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:78784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.884210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.884246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.884282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.884318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.884353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.884388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:78960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.884423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.884464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.884499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.884535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:78712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.884570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.884605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:78512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.884641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.884676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.884711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.884771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:55.630 [2024-07-12 16:00:49.884825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.884863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:78248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.884901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.884942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.884965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.884982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.885004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.630 [2024-07-12 16:00:49.885035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:55.630 [2024-07-12 16:00:49.885057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.630 [2024-07-12 16:00:49.885072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.885109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.885124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.885145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.885159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.885180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.885195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.885216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 
nsid:1 lba:78824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.885231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.886675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.886699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.886750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.886784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.886809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.886825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.886847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.886863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.886885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.886908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.886931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.886947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.886969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.886985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.887007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.887023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.887045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.887061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.887098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.887114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.887151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.887166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.887187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.887202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.887223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.887238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.887259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.887273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:78920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.888356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.888413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.888449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.888500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.888536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:23:55.631 [2024-07-12 16:00:49.888556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.888572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.888607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.888643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:78992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.888678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.888714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:78528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.888777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.888815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.888853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.888891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.888930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.888973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.888996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.889012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.889050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.631 [2024-07-12 16:00:49.889066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.889102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.889118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.889139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.889154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.889174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.889189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.889210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.889225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.889246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.889260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:55.631 [2024-07-12 16:00:49.889281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.631 [2024-07-12 16:00:49.889296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.889317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.889332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.889352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.889367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.889387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.889403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.889424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.889442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.889463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.889478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.889499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.889514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.889535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.889549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.889570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.889585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.889605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.632 [2024-07-12 16:00:49.889620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.889640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.632 [2024-07-12 16:00:49.889655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.889676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:55.632 [2024-07-12 16:00:49.889691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.889712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.632 [2024-07-12 16:00:49.889750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.889774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.889791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.889814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.889830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.891475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.891499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.891561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.632 [2024-07-12 16:00:49.891586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.891609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.632 [2024-07-12 16:00:49.891625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.891645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.632 [2024-07-12 16:00:49.891660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.891681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.632 [2024-07-12 16:00:49.891696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.891731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.632 [2024-07-12 16:00:49.891755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.891779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 
lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.632 [2024-07-12 16:00:49.891795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.891817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.632 [2024-07-12 16:00:49.891833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.892860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.632 [2024-07-12 16:00:49.892883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.892926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.892943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.892965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.892980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.893002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:78520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.893017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.893039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:78496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.632 [2024-07-12 16:00:49.893069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.893091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.893106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.893132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.893148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.893169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.632 [2024-07-12 16:00:49.893183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.893204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:78640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.893219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.893240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:78768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.893255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.893275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:78688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.893291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.893311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.632 [2024-07-12 16:00:49.893326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:55.632 [2024-07-12 16:00:49.893347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.632 [2024-07-12 16:00:49.893362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.633 [2024-07-12 16:00:49.893397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.633 [2024-07-12 16:00:49.893432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.633 [2024-07-12 16:00:49.893467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.633 [2024-07-12 16:00:49.893503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:78752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.633 [2024-07-12 16:00:49.893539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:23:55.633 [2024-07-12 16:00:49.893564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.633 [2024-07-12 16:00:49.893579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.633 [2024-07-12 16:00:49.893615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.633 [2024-07-12 16:00:49.893650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.633 [2024-07-12 16:00:49.893686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.633 [2024-07-12 16:00:49.893743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.633 [2024-07-12 16:00:49.893785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.633 [2024-07-12 16:00:49.893823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.633 [2024-07-12 16:00:49.893861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.633 [2024-07-12 16:00:49.893898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.633 [2024-07-12 16:00:49.893937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.633 [2024-07-12 16:00:49.893975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.893997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.633 [2024-07-12 16:00:49.894013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.894055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.633 [2024-07-12 16:00:49.894072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.894108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.633 [2024-07-12 16:00:49.894123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.894144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.633 [2024-07-12 16:00:49.894159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:55.633 [2024-07-12 16:00:49.894180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:55.633 [2024-07-12 16:00:49.894196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:55.633 Received shutdown signal, test time was about 34.590949 seconds 00:23:55.633 00:23:55.633 Latency(us) 00:23:55.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.633 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:55.633 Verification LBA range: start 0x0 length 0x4000 00:23:55.633 Nvme0n1 : 34.59 8581.17 33.52 0.00 0.00 14892.11 227.56 4026531.84 00:23:55.633 =================================================================================================================== 00:23:55.633 Total : 8581.17 33.52 0.00 0.00 14892.11 227.56 4026531.84 00:23:55.633 16:00:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 
-- # nvmfcleanup 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:55.891 rmmod nvme_tcp 00:23:55.891 rmmod nvme_fabrics 00:23:55.891 rmmod nvme_keyring 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 830297 ']' 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 830297 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 830297 ']' 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 830297 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 830297 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 830297' 00:23:55.891 killing process with pid 830297 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 830297 00:23:55.891 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 830297 00:23:56.457 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:56.457 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:56.457 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:56.457 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:56.457 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:56.457 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.457 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.457 16:00:53 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.358 16:00:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.358 00:23:58.358 real 0m43.256s 00:23:58.358 user 2m10.873s 00:23:58.358 sys 0m11.904s 00:23:58.358 16:00:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 
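The nvmftestfini trace above tears the TCP test environment back down: it unloads the nvme-tcp / nvme-fabrics / nvme-keyring modules, kills the nvmf_tgt process that nvmfappstart recorded (pid 830297 here), removes the SPDK network namespace, and flushes the initiator-side interface. A rough manual equivalent, as a hedged sketch only (module, interface and namespace names are taken from the trace; the exact signals and ordering inside killprocess/_remove_spdk_ns may differ):

    #!/usr/bin/env bash
    # Approximate the nvmftestfini cleanup seen in the log (names/PID illustrative).
    sync
    modprobe -v -r nvme-tcp            # also pulls out nvme-fabrics / nvme-keyring deps
    modprobe -v -r nvme-fabrics || true
    kill -TERM 830297 || true          # nvmf_tgt PID recorded by nvmfappstart
    ip netns delete cvl_0_0_ns_spdk    # roughly what _remove_spdk_ns amounts to here
    ip -4 addr flush cvl_0_1           # initiator-side interface used by the test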
00:23:58.358 16:00:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:58.358 ************************************ 00:23:58.358 END TEST nvmf_host_multipath_status 00:23:58.358 ************************************ 00:23:58.358 16:00:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:58.358 16:00:55 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:58.358 16:00:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:58.358 16:00:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:58.358 16:00:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:58.358 ************************************ 00:23:58.358 START TEST nvmf_discovery_remove_ifc 00:23:58.358 ************************************ 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:58.358 * Looking for test storage... 00:23:58.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.358 16:00:55 
nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.358 16:00:55 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:58.358 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:58.359 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:58.359 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:58.359 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:58.359 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.359 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:58.359 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:58.359 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:58.359 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.359 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.359 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.359 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:58.359 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:58.359 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:23:58.359 16:00:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@296 -- # local -ga e810 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:00.889 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:00.889 Found 0000:84:00.1 (0x8086 - 
0x159b) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:00.889 Found net devices under 0000:84:00.0: cvl_0_0 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:00.889 Found net devices under 0000:84:00.1: cvl_0_1 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:00.889 
16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:24:00.889 00:24:00.889 --- 10.0.0.2 ping statistics --- 00:24:00.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.889 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:24:00.889 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:00.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:24:00.889 00:24:00.889 --- 10.0.0.1 ping statistics --- 00:24:00.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.889 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=837014 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 837014 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 837014 ']' 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.890 16:00:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.890 [2024-07-12 16:00:57.845707] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
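nvmf_tcp_init has now produced the test topology used by the TCP tests in this run: cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, port 4420 is opened in iptables, and both directions are verified with a single ping before the target application is launched inside the namespace. Condensed from the trace above (same interfaces, addresses and commands):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

The target nvmf_tgt (pid 837014, core mask 0x2) is started under ip netns exec in that namespace, which is why its startup log continues below.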
00:24:00.890 [2024-07-12 16:00:57.845833] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.890 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.890 [2024-07-12 16:00:57.911302] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.890 [2024-07-12 16:00:58.021254] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.890 [2024-07-12 16:00:58.021310] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.890 [2024-07-12 16:00:58.021343] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.890 [2024-07-12 16:00:58.021355] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.890 [2024-07-12 16:00:58.021365] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.890 [2024-07-12 16:00:58.021407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.890 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.890 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:00.890 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.890 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.890 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.890 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.890 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:00.890 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.890 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.890 [2024-07-12 16:00:58.175140] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.148 [2024-07-12 16:00:58.183329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:01.148 null0 00:24:01.148 [2024-07-12 16:00:58.215261] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.148 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.148 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=837038 00:24:01.148 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 837038 /tmp/host.sock 00:24:01.148 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 837038 ']' 00:24:01.148 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:01.148 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:01.148 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:24:01.148 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:01.148 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:01.148 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:01.148 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:01.148 [2024-07-12 16:00:58.284587] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:24:01.148 [2024-07-12 16:00:58.284664] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837038 ] 00:24:01.148 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.148 [2024-07-12 16:00:58.345991] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.406 [2024-07-12 16:00:58.460728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.406 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.406 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:01.406 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:01.406 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:01.406 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.406 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:01.407 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.407 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:01.407 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.407 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:01.407 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.407 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:01.407 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.407 16:00:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:02.778 [2024-07-12 16:00:59.652390] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:02.778 [2024-07-12 16:00:59.652417] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:02.778 [2024-07-12 16:00:59.652437] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:02.778 [2024-07-12 16:00:59.779891] 
bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:02.778 [2024-07-12 16:00:59.966955] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:02.778 [2024-07-12 16:00:59.967037] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:02.778 [2024-07-12 16:00:59.967078] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:02.778 [2024-07-12 16:00:59.967101] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:02.778 [2024-07-12 16:00:59.967138] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:02.778 16:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.778 16:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:02.778 16:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:02.778 16:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:02.778 16:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:02.778 16:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.778 16:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:02.778 16:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:02.778 [2024-07-12 16:00:59.971451] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x79c110 was disconnected and freed. delete nvme_qpair. 
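This is the heart of discovery_remove_ifc.sh: with the target (pid 837014) already listening on 10.0.0.2 ports 8009 and 4420 and exporting a null bdev through nqn.2016-06.io.spdk:cnode0, a second nvmf_tgt is started in the root namespace as the host (pid 837038) on /tmp/host.sock with bdev_nvme debug logging, and bdev_nvme_start_discovery attaches it to the target's discovery service. The attach pulls in cnode0 and surfaces it as nvme0n1, and the short reconnect/loss timeouts are what let the interface-removal steps later in the test expire quickly. The host-side sequence, as traced above (rpc_cmd forwards these arguments to scripts/rpc.py; paths shortened to the repository root):

    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    # wait for /tmp/host.sock to accept RPCs (the test uses its waitforlisten helper), then:
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    ./scripts/rpc.py -s /tmp/host.sock framework_start_init
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach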
00:24:02.778 16:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:02.778 16:00:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.778 16:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:02.778 16:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:02.778 16:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:02.778 16:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:02.778 16:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:02.778 16:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:02.778 16:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:02.778 16:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.036 16:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:03.036 16:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:03.036 16:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:03.036 16:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.036 16:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:03.036 16:01:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:03.968 16:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:03.968 16:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:03.968 16:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.968 16:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:03.968 16:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:03.968 16:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:03.968 16:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:03.968 16:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.968 16:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:03.968 16:01:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:04.900 16:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:04.900 16:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:04.900 16:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.900 16:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:04.900 16:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:24:04.900 16:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:04.900 16:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:04.900 16:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.157 16:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:05.157 16:01:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:06.088 16:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:06.088 16:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:06.088 16:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:06.088 16:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.088 16:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:06.088 16:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:06.088 16:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:06.088 16:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.088 16:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:06.088 16:01:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:07.019 16:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:07.019 16:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:07.020 16:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:07.020 16:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:07.020 16:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.020 16:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:07.020 16:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:07.020 16:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.020 16:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:07.020 16:01:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:08.016 16:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:08.016 16:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:08.016 16:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:08.016 16:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.016 16:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:08.016 16:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:08.016 16:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:08.274 
16:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.274 16:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:08.274 16:01:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:08.274 [2024-07-12 16:01:05.408416] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:08.274 [2024-07-12 16:01:05.408495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.274 [2024-07-12 16:01:05.408515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.274 [2024-07-12 16:01:05.408546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.274 [2024-07-12 16:01:05.408560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.274 [2024-07-12 16:01:05.408574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.274 [2024-07-12 16:01:05.408587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.274 [2024-07-12 16:01:05.408600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.274 [2024-07-12 16:01:05.408613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.274 [2024-07-12 16:01:05.408626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:08.274 [2024-07-12 16:01:05.408639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:08.274 [2024-07-12 16:01:05.408651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762b30 is same with the state(5) to be set 00:24:08.274 [2024-07-12 16:01:05.418433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762b30 (9): Bad file descriptor 00:24:08.274 [2024-07-12 16:01:05.428475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:09.208 16:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:09.208 16:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:09.208 16:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.208 16:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:09.208 16:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:09.208 16:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:09.208 16:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:09.208 [2024-07-12 16:01:06.439804] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:09.208 [2024-07-12 16:01:06.439886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x762b30 with addr=10.0.0.2, port=4420 00:24:09.208 [2024-07-12 16:01:06.439912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x762b30 is same with the state(5) to be set 00:24:09.208 [2024-07-12 16:01:06.439958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x762b30 (9): Bad file descriptor 00:24:09.208 [2024-07-12 16:01:06.440409] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.208 [2024-07-12 16:01:06.440441] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:09.208 [2024-07-12 16:01:06.440456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:09.208 [2024-07-12 16:01:06.440472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:09.208 [2024-07-12 16:01:06.440502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.208 [2024-07-12 16:01:06.440519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:09.208 16:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.208 16:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:09.208 16:01:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:10.580 [2024-07-12 16:01:07.443033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:10.580 [2024-07-12 16:01:07.443071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:10.580 [2024-07-12 16:01:07.443085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:10.580 [2024-07-12 16:01:07.443098] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:10.580 [2024-07-12 16:01:07.443121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
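The connect() failures with errno 110 and the "Resetting controller failed" errors above are the expected result of the removal step traced at discovery_remove_ifc.sh@75-76: the test deleted the target's address and downed cvl_0_0 inside the namespace, so the host's reconnect attempts (one per second, per the discovery flags) keep timing out until the 2-second ctrlr-loss timeout expires and the nvme0 controller, and with it nvme0n1, is torn down. The trigger was simply:

    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

After that, wait_for_bdev '' polls until bdev_get_bdevs returns an empty list, which is what the remaining error output below is driving toward.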
00:24:10.580 [2024-07-12 16:01:07.443161] bdev_nvme.c:6739:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:10.580 [2024-07-12 16:01:07.443202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.580 [2024-07-12 16:01:07.443222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.580 [2024-07-12 16:01:07.443239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.580 [2024-07-12 16:01:07.443252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.580 [2024-07-12 16:01:07.443264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.580 [2024-07-12 16:01:07.443277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.580 [2024-07-12 16:01:07.443290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.580 [2024-07-12 16:01:07.443302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.580 [2024-07-12 16:01:07.443315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.580 [2024-07-12 16:01:07.443328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.580 [2024-07-12 16:01:07.443340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:24:10.580 [2024-07-12 16:01:07.443473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x761fb0 (9): Bad file descriptor 00:24:10.580 [2024-07-12 16:01:07.444489] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:10.580 [2024-07-12 16:01:07.444511] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:10.580 16:01:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:11.518 16:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:11.518 16:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.518 16:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:11.518 16:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.518 16:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:24:11.518 16:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:11.518 16:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:11.518 16:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.518 16:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:11.518 16:01:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:12.449 [2024-07-12 16:01:09.499898] bdev_nvme.c:6988:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:12.449 [2024-07-12 16:01:09.499933] bdev_nvme.c:7068:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:12.449 [2024-07-12 16:01:09.499956] bdev_nvme.c:6951:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:12.449 [2024-07-12 16:01:09.587236] bdev_nvme.c:6917:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:12.449 16:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:12.449 16:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.449 16:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:12.449 16:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.449 16:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:12.449 16:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:12.449 16:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:12.449 16:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.449 16:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:12.449 16:01:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:12.449 [2024-07-12 16:01:09.689397] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:12.449 [2024-07-12 16:01:09.689448] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:12.449 [2024-07-12 16:01:09.689484] bdev_nvme.c:7778:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:12.449 [2024-07-12 16:01:09.689506] bdev_nvme.c:6807:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:12.449 [2024-07-12 16:01:09.689519] bdev_nvme.c:6766:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:12.449 [2024-07-12 16:01:09.697125] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7a7e30 was disconnected and freed. delete nvme_qpair. 
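Once the address is restored and cvl_0_0 is brought back up (discovery_remove_ifc.sh@82-83, traced above), the discovery poller reattaches and creates a fresh controller, so the namespace reappears as nvme1n1 rather than nvme0n1. The get_bdev_list / wait_for_bdev pattern that repeats throughout this trace (bdev_get_bdevs piped through jq, sort and xargs, retried once per second) can be read as roughly the following helpers; this is a reconstruction from the traced pipeline, using rpc.py directly in place of the test's rpc_cmd wrapper and without the retry bound the real script presumably applies:

    get_bdev_list() {
        # All bdev names known to the host app, sorted and space-joined on one line.
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll until the bdev list matches the expectation ("" means no bdevs left).
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

So wait_for_bdev nvme0n1, wait_for_bdev '' and wait_for_bdev nvme1n1 are the three checkpoints of this test: attached, interface removed, interface restored.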
00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 837038 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 837038 ']' 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 837038 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 837038 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 837038' 00:24:13.820 killing process with pid 837038 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 837038 00:24:13.820 16:01:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 837038 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:13.820 rmmod nvme_tcp 00:24:13.820 rmmod nvme_fabrics 00:24:13.820 rmmod nvme_keyring 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:13.820 
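With the trap cleared and the host process 837038 killed, nvmftestfini begins tearing the rig down: nvmfcleanup syncs and unloads the kernel NVMe/TCP stack (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above), then the target process 837014 is killed and the namespace and addresses are removed in the lines that follow. The unload step is essentially a tolerant retry loop around modprobe -r, along these lines (simplified from the nvmf/common.sh@120-125 trace; the inter-retry sleep is an assumption, not visible here because the first attempt succeeds):

    sync
    set +e
    for i in {1..20}; do
        if modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics; then
            break
        fi
        sleep 1   # assumption: brief pause before retrying the unload
    done
    set -e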
16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 837014 ']' 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 837014 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 837014 ']' 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 837014 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 837014 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 837014' 00:24:13.820 killing process with pid 837014 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 837014 00:24:13.820 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 837014 00:24:14.078 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:14.078 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:14.078 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:14.078 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:14.078 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:14.078 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.078 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.078 16:01:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.612 16:01:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:16.612 00:24:16.612 real 0m17.821s 00:24:16.612 user 0m25.840s 00:24:16.612 sys 0m3.079s 00:24:16.612 16:01:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:16.612 16:01:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:16.612 ************************************ 00:24:16.612 END TEST nvmf_discovery_remove_ifc 00:24:16.612 ************************************ 00:24:16.612 16:01:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:16.612 16:01:13 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:16.612 16:01:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:16.612 16:01:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:16.612 16:01:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:16.612 ************************************ 00:24:16.612 START TEST nvmf_identify_kernel_target 00:24:16.612 ************************************ 00:24:16.612 16:01:13 
nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:16.612 * Looking for test storage... 00:24:16.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:16.612 16:01:13 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:24:16.612 16:01:13 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:18.514 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:18.514 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:18.514 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:18.515 Found net devices under 0000:84:00.0: cvl_0_0 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:18.515 Found net devices under 0000:84:00.1: cvl_0_1 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:18.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:18.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:24:18.515 00:24:18.515 --- 10.0.0.2 ping statistics --- 00:24:18.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.515 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:18.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:18.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:24:18.515 00:24:18.515 --- 10.0.0.1 ping statistics --- 00:24:18.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.515 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:18.515 16:01:15 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:18.515 16:01:15 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:19.889 Waiting for block devices as requested 00:24:19.889 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:24:19.889 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:20.148 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:20.148 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:20.406 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:20.406 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:20.406 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:20.406 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:20.666 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:20.666 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:20.666 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:20.666 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:20.925 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:20.925 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:20.925 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:21.184 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:21.184 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:21.184 No valid GPT data, bailing 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:21.184 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:21.443 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:21.443 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:21.443 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:21.443 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:24:21.443 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:21.443 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:24:21.443 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:21.443 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:24:21.443 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:24:21.443 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:24:21.443 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:21.443 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:24:21.443 00:24:21.443 Discovery Log Number of Records 2, Generation counter 2 00:24:21.443 =====Discovery Log Entry 0====== 00:24:21.443 trtype: tcp 00:24:21.443 adrfam: ipv4 00:24:21.443 subtype: current discovery subsystem 00:24:21.443 treq: not specified, sq flow control disable supported 00:24:21.443 portid: 1 00:24:21.443 trsvcid: 4420 00:24:21.443 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:21.443 traddr: 10.0.0.1 00:24:21.443 eflags: none 00:24:21.443 sectype: none 00:24:21.443 =====Discovery Log Entry 1====== 00:24:21.443 trtype: tcp 00:24:21.443 adrfam: ipv4 00:24:21.443 subtype: nvme subsystem 00:24:21.443 treq: not specified, sq flow control disable supported 00:24:21.443 portid: 1 00:24:21.443 trsvcid: 4420 00:24:21.443 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:21.443 traddr: 10.0.0.1 00:24:21.443 eflags: none 00:24:21.443 sectype: none 00:24:21.443 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:21.443 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:21.443 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.443 ===================================================== 00:24:21.443 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:21.443 ===================================================== 00:24:21.443 Controller Capabilities/Features 00:24:21.443 ================================ 00:24:21.443 Vendor ID: 0000 00:24:21.443 Subsystem Vendor ID: 0000 00:24:21.443 Serial Number: 07d0004effdde4153d73 00:24:21.443 Model Number: Linux 00:24:21.443 Firmware Version: 6.7.0-68 00:24:21.443 Recommended Arb Burst: 0 00:24:21.443 IEEE OUI Identifier: 00 00 00 00:24:21.443 Multi-path I/O 00:24:21.443 May have multiple subsystem ports: No 00:24:21.443 May have multiple 
controllers: No 00:24:21.443 Associated with SR-IOV VF: No 00:24:21.443 Max Data Transfer Size: Unlimited 00:24:21.443 Max Number of Namespaces: 0 00:24:21.443 Max Number of I/O Queues: 1024 00:24:21.444 NVMe Specification Version (VS): 1.3 00:24:21.444 NVMe Specification Version (Identify): 1.3 00:24:21.444 Maximum Queue Entries: 1024 00:24:21.444 Contiguous Queues Required: No 00:24:21.444 Arbitration Mechanisms Supported 00:24:21.444 Weighted Round Robin: Not Supported 00:24:21.444 Vendor Specific: Not Supported 00:24:21.444 Reset Timeout: 7500 ms 00:24:21.444 Doorbell Stride: 4 bytes 00:24:21.444 NVM Subsystem Reset: Not Supported 00:24:21.444 Command Sets Supported 00:24:21.444 NVM Command Set: Supported 00:24:21.444 Boot Partition: Not Supported 00:24:21.444 Memory Page Size Minimum: 4096 bytes 00:24:21.444 Memory Page Size Maximum: 4096 bytes 00:24:21.444 Persistent Memory Region: Not Supported 00:24:21.444 Optional Asynchronous Events Supported 00:24:21.444 Namespace Attribute Notices: Not Supported 00:24:21.444 Firmware Activation Notices: Not Supported 00:24:21.444 ANA Change Notices: Not Supported 00:24:21.444 PLE Aggregate Log Change Notices: Not Supported 00:24:21.444 LBA Status Info Alert Notices: Not Supported 00:24:21.444 EGE Aggregate Log Change Notices: Not Supported 00:24:21.444 Normal NVM Subsystem Shutdown event: Not Supported 00:24:21.444 Zone Descriptor Change Notices: Not Supported 00:24:21.444 Discovery Log Change Notices: Supported 00:24:21.444 Controller Attributes 00:24:21.444 128-bit Host Identifier: Not Supported 00:24:21.444 Non-Operational Permissive Mode: Not Supported 00:24:21.444 NVM Sets: Not Supported 00:24:21.444 Read Recovery Levels: Not Supported 00:24:21.444 Endurance Groups: Not Supported 00:24:21.444 Predictable Latency Mode: Not Supported 00:24:21.444 Traffic Based Keep ALive: Not Supported 00:24:21.444 Namespace Granularity: Not Supported 00:24:21.444 SQ Associations: Not Supported 00:24:21.444 UUID List: Not Supported 00:24:21.444 Multi-Domain Subsystem: Not Supported 00:24:21.444 Fixed Capacity Management: Not Supported 00:24:21.444 Variable Capacity Management: Not Supported 00:24:21.444 Delete Endurance Group: Not Supported 00:24:21.444 Delete NVM Set: Not Supported 00:24:21.444 Extended LBA Formats Supported: Not Supported 00:24:21.444 Flexible Data Placement Supported: Not Supported 00:24:21.444 00:24:21.444 Controller Memory Buffer Support 00:24:21.444 ================================ 00:24:21.444 Supported: No 00:24:21.444 00:24:21.444 Persistent Memory Region Support 00:24:21.444 ================================ 00:24:21.444 Supported: No 00:24:21.444 00:24:21.444 Admin Command Set Attributes 00:24:21.444 ============================ 00:24:21.444 Security Send/Receive: Not Supported 00:24:21.444 Format NVM: Not Supported 00:24:21.444 Firmware Activate/Download: Not Supported 00:24:21.444 Namespace Management: Not Supported 00:24:21.444 Device Self-Test: Not Supported 00:24:21.444 Directives: Not Supported 00:24:21.444 NVMe-MI: Not Supported 00:24:21.444 Virtualization Management: Not Supported 00:24:21.444 Doorbell Buffer Config: Not Supported 00:24:21.444 Get LBA Status Capability: Not Supported 00:24:21.444 Command & Feature Lockdown Capability: Not Supported 00:24:21.444 Abort Command Limit: 1 00:24:21.444 Async Event Request Limit: 1 00:24:21.444 Number of Firmware Slots: N/A 00:24:21.444 Firmware Slot 1 Read-Only: N/A 00:24:21.444 Firmware Activation Without Reset: N/A 00:24:21.444 Multiple Update Detection Support: N/A 
00:24:21.444 Firmware Update Granularity: No Information Provided 00:24:21.444 Per-Namespace SMART Log: No 00:24:21.444 Asymmetric Namespace Access Log Page: Not Supported 00:24:21.444 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:21.444 Command Effects Log Page: Not Supported 00:24:21.444 Get Log Page Extended Data: Supported 00:24:21.444 Telemetry Log Pages: Not Supported 00:24:21.444 Persistent Event Log Pages: Not Supported 00:24:21.444 Supported Log Pages Log Page: May Support 00:24:21.444 Commands Supported & Effects Log Page: Not Supported 00:24:21.444 Feature Identifiers & Effects Log Page:May Support 00:24:21.444 NVMe-MI Commands & Effects Log Page: May Support 00:24:21.444 Data Area 4 for Telemetry Log: Not Supported 00:24:21.444 Error Log Page Entries Supported: 1 00:24:21.444 Keep Alive: Not Supported 00:24:21.444 00:24:21.444 NVM Command Set Attributes 00:24:21.444 ========================== 00:24:21.444 Submission Queue Entry Size 00:24:21.444 Max: 1 00:24:21.444 Min: 1 00:24:21.444 Completion Queue Entry Size 00:24:21.444 Max: 1 00:24:21.444 Min: 1 00:24:21.444 Number of Namespaces: 0 00:24:21.444 Compare Command: Not Supported 00:24:21.444 Write Uncorrectable Command: Not Supported 00:24:21.444 Dataset Management Command: Not Supported 00:24:21.444 Write Zeroes Command: Not Supported 00:24:21.444 Set Features Save Field: Not Supported 00:24:21.444 Reservations: Not Supported 00:24:21.444 Timestamp: Not Supported 00:24:21.444 Copy: Not Supported 00:24:21.444 Volatile Write Cache: Not Present 00:24:21.444 Atomic Write Unit (Normal): 1 00:24:21.444 Atomic Write Unit (PFail): 1 00:24:21.444 Atomic Compare & Write Unit: 1 00:24:21.444 Fused Compare & Write: Not Supported 00:24:21.444 Scatter-Gather List 00:24:21.444 SGL Command Set: Supported 00:24:21.444 SGL Keyed: Not Supported 00:24:21.444 SGL Bit Bucket Descriptor: Not Supported 00:24:21.444 SGL Metadata Pointer: Not Supported 00:24:21.444 Oversized SGL: Not Supported 00:24:21.444 SGL Metadata Address: Not Supported 00:24:21.444 SGL Offset: Supported 00:24:21.444 Transport SGL Data Block: Not Supported 00:24:21.444 Replay Protected Memory Block: Not Supported 00:24:21.444 00:24:21.444 Firmware Slot Information 00:24:21.444 ========================= 00:24:21.444 Active slot: 0 00:24:21.444 00:24:21.444 00:24:21.444 Error Log 00:24:21.444 ========= 00:24:21.444 00:24:21.444 Active Namespaces 00:24:21.444 ================= 00:24:21.444 Discovery Log Page 00:24:21.444 ================== 00:24:21.444 Generation Counter: 2 00:24:21.444 Number of Records: 2 00:24:21.444 Record Format: 0 00:24:21.444 00:24:21.444 Discovery Log Entry 0 00:24:21.444 ---------------------- 00:24:21.444 Transport Type: 3 (TCP) 00:24:21.444 Address Family: 1 (IPv4) 00:24:21.444 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:21.444 Entry Flags: 00:24:21.444 Duplicate Returned Information: 0 00:24:21.444 Explicit Persistent Connection Support for Discovery: 0 00:24:21.444 Transport Requirements: 00:24:21.444 Secure Channel: Not Specified 00:24:21.444 Port ID: 1 (0x0001) 00:24:21.444 Controller ID: 65535 (0xffff) 00:24:21.444 Admin Max SQ Size: 32 00:24:21.444 Transport Service Identifier: 4420 00:24:21.444 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:21.444 Transport Address: 10.0.0.1 00:24:21.444 Discovery Log Entry 1 00:24:21.444 ---------------------- 00:24:21.444 Transport Type: 3 (TCP) 00:24:21.444 Address Family: 1 (IPv4) 00:24:21.444 Subsystem Type: 2 (NVM Subsystem) 00:24:21.444 Entry Flags: 
00:24:21.444 Duplicate Returned Information: 0 00:24:21.444 Explicit Persistent Connection Support for Discovery: 0 00:24:21.444 Transport Requirements: 00:24:21.444 Secure Channel: Not Specified 00:24:21.444 Port ID: 1 (0x0001) 00:24:21.444 Controller ID: 65535 (0xffff) 00:24:21.444 Admin Max SQ Size: 32 00:24:21.444 Transport Service Identifier: 4420 00:24:21.444 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:21.444 Transport Address: 10.0.0.1 00:24:21.444 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:21.444 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.703 get_feature(0x01) failed 00:24:21.703 get_feature(0x02) failed 00:24:21.703 get_feature(0x04) failed 00:24:21.703 ===================================================== 00:24:21.703 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:21.703 ===================================================== 00:24:21.703 Controller Capabilities/Features 00:24:21.703 ================================ 00:24:21.703 Vendor ID: 0000 00:24:21.703 Subsystem Vendor ID: 0000 00:24:21.703 Serial Number: eb1f4cdff598f7691e57 00:24:21.703 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:21.703 Firmware Version: 6.7.0-68 00:24:21.703 Recommended Arb Burst: 6 00:24:21.703 IEEE OUI Identifier: 00 00 00 00:24:21.703 Multi-path I/O 00:24:21.703 May have multiple subsystem ports: Yes 00:24:21.703 May have multiple controllers: Yes 00:24:21.703 Associated with SR-IOV VF: No 00:24:21.703 Max Data Transfer Size: Unlimited 00:24:21.703 Max Number of Namespaces: 1024 00:24:21.703 Max Number of I/O Queues: 128 00:24:21.703 NVMe Specification Version (VS): 1.3 00:24:21.703 NVMe Specification Version (Identify): 1.3 00:24:21.703 Maximum Queue Entries: 1024 00:24:21.703 Contiguous Queues Required: No 00:24:21.703 Arbitration Mechanisms Supported 00:24:21.703 Weighted Round Robin: Not Supported 00:24:21.703 Vendor Specific: Not Supported 00:24:21.703 Reset Timeout: 7500 ms 00:24:21.703 Doorbell Stride: 4 bytes 00:24:21.703 NVM Subsystem Reset: Not Supported 00:24:21.703 Command Sets Supported 00:24:21.703 NVM Command Set: Supported 00:24:21.703 Boot Partition: Not Supported 00:24:21.703 Memory Page Size Minimum: 4096 bytes 00:24:21.703 Memory Page Size Maximum: 4096 bytes 00:24:21.703 Persistent Memory Region: Not Supported 00:24:21.703 Optional Asynchronous Events Supported 00:24:21.703 Namespace Attribute Notices: Supported 00:24:21.703 Firmware Activation Notices: Not Supported 00:24:21.703 ANA Change Notices: Supported 00:24:21.703 PLE Aggregate Log Change Notices: Not Supported 00:24:21.703 LBA Status Info Alert Notices: Not Supported 00:24:21.703 EGE Aggregate Log Change Notices: Not Supported 00:24:21.703 Normal NVM Subsystem Shutdown event: Not Supported 00:24:21.703 Zone Descriptor Change Notices: Not Supported 00:24:21.703 Discovery Log Change Notices: Not Supported 00:24:21.703 Controller Attributes 00:24:21.703 128-bit Host Identifier: Supported 00:24:21.703 Non-Operational Permissive Mode: Not Supported 00:24:21.703 NVM Sets: Not Supported 00:24:21.703 Read Recovery Levels: Not Supported 00:24:21.703 Endurance Groups: Not Supported 00:24:21.703 Predictable Latency Mode: Not Supported 00:24:21.703 Traffic Based Keep ALive: Supported 00:24:21.703 Namespace Granularity: Not Supported 
00:24:21.703 SQ Associations: Not Supported 00:24:21.703 UUID List: Not Supported 00:24:21.703 Multi-Domain Subsystem: Not Supported 00:24:21.703 Fixed Capacity Management: Not Supported 00:24:21.703 Variable Capacity Management: Not Supported 00:24:21.703 Delete Endurance Group: Not Supported 00:24:21.703 Delete NVM Set: Not Supported 00:24:21.703 Extended LBA Formats Supported: Not Supported 00:24:21.703 Flexible Data Placement Supported: Not Supported 00:24:21.703 00:24:21.703 Controller Memory Buffer Support 00:24:21.703 ================================ 00:24:21.703 Supported: No 00:24:21.703 00:24:21.703 Persistent Memory Region Support 00:24:21.703 ================================ 00:24:21.703 Supported: No 00:24:21.703 00:24:21.703 Admin Command Set Attributes 00:24:21.703 ============================ 00:24:21.703 Security Send/Receive: Not Supported 00:24:21.703 Format NVM: Not Supported 00:24:21.703 Firmware Activate/Download: Not Supported 00:24:21.703 Namespace Management: Not Supported 00:24:21.703 Device Self-Test: Not Supported 00:24:21.703 Directives: Not Supported 00:24:21.703 NVMe-MI: Not Supported 00:24:21.703 Virtualization Management: Not Supported 00:24:21.703 Doorbell Buffer Config: Not Supported 00:24:21.703 Get LBA Status Capability: Not Supported 00:24:21.703 Command & Feature Lockdown Capability: Not Supported 00:24:21.703 Abort Command Limit: 4 00:24:21.703 Async Event Request Limit: 4 00:24:21.703 Number of Firmware Slots: N/A 00:24:21.703 Firmware Slot 1 Read-Only: N/A 00:24:21.703 Firmware Activation Without Reset: N/A 00:24:21.703 Multiple Update Detection Support: N/A 00:24:21.703 Firmware Update Granularity: No Information Provided 00:24:21.703 Per-Namespace SMART Log: Yes 00:24:21.703 Asymmetric Namespace Access Log Page: Supported 00:24:21.703 ANA Transition Time : 10 sec 00:24:21.703 00:24:21.703 Asymmetric Namespace Access Capabilities 00:24:21.703 ANA Optimized State : Supported 00:24:21.703 ANA Non-Optimized State : Supported 00:24:21.703 ANA Inaccessible State : Supported 00:24:21.703 ANA Persistent Loss State : Supported 00:24:21.703 ANA Change State : Supported 00:24:21.703 ANAGRPID is not changed : No 00:24:21.703 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:21.703 00:24:21.703 ANA Group Identifier Maximum : 128 00:24:21.703 Number of ANA Group Identifiers : 128 00:24:21.703 Max Number of Allowed Namespaces : 1024 00:24:21.703 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:21.703 Command Effects Log Page: Supported 00:24:21.703 Get Log Page Extended Data: Supported 00:24:21.703 Telemetry Log Pages: Not Supported 00:24:21.703 Persistent Event Log Pages: Not Supported 00:24:21.703 Supported Log Pages Log Page: May Support 00:24:21.703 Commands Supported & Effects Log Page: Not Supported 00:24:21.703 Feature Identifiers & Effects Log Page:May Support 00:24:21.703 NVMe-MI Commands & Effects Log Page: May Support 00:24:21.703 Data Area 4 for Telemetry Log: Not Supported 00:24:21.703 Error Log Page Entries Supported: 128 00:24:21.703 Keep Alive: Supported 00:24:21.703 Keep Alive Granularity: 1000 ms 00:24:21.703 00:24:21.703 NVM Command Set Attributes 00:24:21.703 ========================== 00:24:21.704 Submission Queue Entry Size 00:24:21.704 Max: 64 00:24:21.704 Min: 64 00:24:21.704 Completion Queue Entry Size 00:24:21.704 Max: 16 00:24:21.704 Min: 16 00:24:21.704 Number of Namespaces: 1024 00:24:21.704 Compare Command: Not Supported 00:24:21.704 Write Uncorrectable Command: Not Supported 00:24:21.704 Dataset Management Command: Supported 
00:24:21.704 Write Zeroes Command: Supported 00:24:21.704 Set Features Save Field: Not Supported 00:24:21.704 Reservations: Not Supported 00:24:21.704 Timestamp: Not Supported 00:24:21.704 Copy: Not Supported 00:24:21.704 Volatile Write Cache: Present 00:24:21.704 Atomic Write Unit (Normal): 1 00:24:21.704 Atomic Write Unit (PFail): 1 00:24:21.704 Atomic Compare & Write Unit: 1 00:24:21.704 Fused Compare & Write: Not Supported 00:24:21.704 Scatter-Gather List 00:24:21.704 SGL Command Set: Supported 00:24:21.704 SGL Keyed: Not Supported 00:24:21.704 SGL Bit Bucket Descriptor: Not Supported 00:24:21.704 SGL Metadata Pointer: Not Supported 00:24:21.704 Oversized SGL: Not Supported 00:24:21.704 SGL Metadata Address: Not Supported 00:24:21.704 SGL Offset: Supported 00:24:21.704 Transport SGL Data Block: Not Supported 00:24:21.704 Replay Protected Memory Block: Not Supported 00:24:21.704 00:24:21.704 Firmware Slot Information 00:24:21.704 ========================= 00:24:21.704 Active slot: 0 00:24:21.704 00:24:21.704 Asymmetric Namespace Access 00:24:21.704 =========================== 00:24:21.704 Change Count : 0 00:24:21.704 Number of ANA Group Descriptors : 1 00:24:21.704 ANA Group Descriptor : 0 00:24:21.704 ANA Group ID : 1 00:24:21.704 Number of NSID Values : 1 00:24:21.704 Change Count : 0 00:24:21.704 ANA State : 1 00:24:21.704 Namespace Identifier : 1 00:24:21.704 00:24:21.704 Commands Supported and Effects 00:24:21.704 ============================== 00:24:21.704 Admin Commands 00:24:21.704 -------------- 00:24:21.704 Get Log Page (02h): Supported 00:24:21.704 Identify (06h): Supported 00:24:21.704 Abort (08h): Supported 00:24:21.704 Set Features (09h): Supported 00:24:21.704 Get Features (0Ah): Supported 00:24:21.704 Asynchronous Event Request (0Ch): Supported 00:24:21.704 Keep Alive (18h): Supported 00:24:21.704 I/O Commands 00:24:21.704 ------------ 00:24:21.704 Flush (00h): Supported 00:24:21.704 Write (01h): Supported LBA-Change 00:24:21.704 Read (02h): Supported 00:24:21.704 Write Zeroes (08h): Supported LBA-Change 00:24:21.704 Dataset Management (09h): Supported 00:24:21.704 00:24:21.704 Error Log 00:24:21.704 ========= 00:24:21.704 Entry: 0 00:24:21.704 Error Count: 0x3 00:24:21.704 Submission Queue Id: 0x0 00:24:21.704 Command Id: 0x5 00:24:21.704 Phase Bit: 0 00:24:21.704 Status Code: 0x2 00:24:21.704 Status Code Type: 0x0 00:24:21.704 Do Not Retry: 1 00:24:21.704 Error Location: 0x28 00:24:21.704 LBA: 0x0 00:24:21.704 Namespace: 0x0 00:24:21.704 Vendor Log Page: 0x0 00:24:21.704 ----------- 00:24:21.704 Entry: 1 00:24:21.704 Error Count: 0x2 00:24:21.704 Submission Queue Id: 0x0 00:24:21.704 Command Id: 0x5 00:24:21.704 Phase Bit: 0 00:24:21.704 Status Code: 0x2 00:24:21.704 Status Code Type: 0x0 00:24:21.704 Do Not Retry: 1 00:24:21.704 Error Location: 0x28 00:24:21.704 LBA: 0x0 00:24:21.704 Namespace: 0x0 00:24:21.704 Vendor Log Page: 0x0 00:24:21.704 ----------- 00:24:21.704 Entry: 2 00:24:21.704 Error Count: 0x1 00:24:21.704 Submission Queue Id: 0x0 00:24:21.704 Command Id: 0x4 00:24:21.704 Phase Bit: 0 00:24:21.704 Status Code: 0x2 00:24:21.704 Status Code Type: 0x0 00:24:21.704 Do Not Retry: 1 00:24:21.704 Error Location: 0x28 00:24:21.704 LBA: 0x0 00:24:21.704 Namespace: 0x0 00:24:21.704 Vendor Log Page: 0x0 00:24:21.704 00:24:21.704 Number of Queues 00:24:21.704 ================ 00:24:21.704 Number of I/O Submission Queues: 128 00:24:21.704 Number of I/O Completion Queues: 128 00:24:21.704 00:24:21.704 ZNS Specific Controller Data 00:24:21.704 
============================ 00:24:21.704 Zone Append Size Limit: 0 00:24:21.704 00:24:21.704 00:24:21.704 Active Namespaces 00:24:21.704 ================= 00:24:21.704 get_feature(0x05) failed 00:24:21.704 Namespace ID:1 00:24:21.704 Command Set Identifier: NVM (00h) 00:24:21.704 Deallocate: Supported 00:24:21.704 Deallocated/Unwritten Error: Not Supported 00:24:21.704 Deallocated Read Value: Unknown 00:24:21.704 Deallocate in Write Zeroes: Not Supported 00:24:21.704 Deallocated Guard Field: 0xFFFF 00:24:21.704 Flush: Supported 00:24:21.704 Reservation: Not Supported 00:24:21.704 Namespace Sharing Capabilities: Multiple Controllers 00:24:21.704 Size (in LBAs): 1953525168 (931GiB) 00:24:21.704 Capacity (in LBAs): 1953525168 (931GiB) 00:24:21.704 Utilization (in LBAs): 1953525168 (931GiB) 00:24:21.704 UUID: 2d3ae0dc-2a67-4328-a807-f7033dffc120 00:24:21.704 Thin Provisioning: Not Supported 00:24:21.704 Per-NS Atomic Units: Yes 00:24:21.704 Atomic Boundary Size (Normal): 0 00:24:21.704 Atomic Boundary Size (PFail): 0 00:24:21.704 Atomic Boundary Offset: 0 00:24:21.704 NGUID/EUI64 Never Reused: No 00:24:21.704 ANA group ID: 1 00:24:21.704 Namespace Write Protected: No 00:24:21.704 Number of LBA Formats: 1 00:24:21.704 Current LBA Format: LBA Format #00 00:24:21.704 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:21.704 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:21.704 rmmod nvme_tcp 00:24:21.704 rmmod nvme_fabrics 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.704 16:01:18 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.608 16:01:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:23.608 
16:01:20 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:23.608 16:01:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:23.608 16:01:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:24:23.608 16:01:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:23.608 16:01:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:23.608 16:01:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:23.608 16:01:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:23.608 16:01:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:23.608 16:01:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:23.608 16:01:20 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:24.984 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:24.984 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:24.984 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:24.984 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:24.984 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:24.984 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:24.984 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:24.984 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:24.984 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:24.984 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:24.984 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:24.984 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:24.984 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:24.984 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:24.984 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:24.984 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:25.918 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:24:26.177 00:24:26.177 real 0m9.910s 00:24:26.177 user 0m2.154s 00:24:26.177 sys 0m3.604s 00:24:26.177 16:01:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:26.177 16:01:23 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.177 ************************************ 00:24:26.177 END TEST nvmf_identify_kernel_target 00:24:26.177 ************************************ 00:24:26.177 16:01:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:26.177 16:01:23 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:26.177 16:01:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:26.177 16:01:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:26.177 16:01:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:26.177 ************************************ 00:24:26.177 START TEST nvmf_auth_host 00:24:26.177 ************************************ 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:26.177 * Looking for test storage... 00:24:26.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:26.177 16:01:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.709 
16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:28.709 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:28.709 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:28.709 Found net devices under 0000:84:00.0: 
cvl_0_0 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.709 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:28.710 Found net devices under 0000:84:00.1: cvl_0_1 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:28.710 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.710 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:24:28.710 00:24:28.710 --- 10.0.0.2 ping statistics --- 00:24:28.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.710 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.710 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.710 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:24:28.710 00:24:28.710 --- 10.0.0.1 ping statistics --- 00:24:28.710 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.710 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=844280 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 844280 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 844280 ']' 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
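The trace above shows nvmfappstart bringing up the SPDK target for the auth tests: it launches build/bin/nvmf_tgt inside the cvl_0_0_ns_spdk namespace with '-i 0 -e 0xFFFF -L nvme_auth', records the PID (844280 in this run), and then waits for the process to start listening on its RPC socket before the test continues. A minimal sketch of that sequence, under the same assumptions as this run (namespace cvl_0_0_ns_spdk, default RPC socket /var/tmp/spdk.sock, an SPDK build tree as the working directory, run with the privileges the test normally uses); the rpc_get_methods polling loop stands in for the waitforlisten helper and is not its exact implementation:

    # launch the target inside the test namespace with the same flags as in this run
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!

    # poll the RPC socket until the target answers, then let the test proceed
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt ($nvmfpid) is ready on /var/tmp/spdk.sock"
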
00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:28.710 16:01:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=02c546780a42ef1a2699f4a11f9767fa 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.2Oy 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 02c546780a42ef1a2699f4a11f9767fa 0 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 02c546780a42ef1a2699f4a11f9767fa 0 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=02c546780a42ef1a2699f4a11f9767fa 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.2Oy 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.2Oy 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.2Oy 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:28.969 
16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ed0fa9283f5790d51779b87db19596da14d3749a47fdcd2f37df10b4ea4c343c 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2RL 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ed0fa9283f5790d51779b87db19596da14d3749a47fdcd2f37df10b4ea4c343c 3 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ed0fa9283f5790d51779b87db19596da14d3749a47fdcd2f37df10b4ea4c343c 3 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ed0fa9283f5790d51779b87db19596da14d3749a47fdcd2f37df10b4ea4c343c 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2RL 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2RL 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.2RL 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=263980540fbcf398c2feb11bcd1de172389a22cb683d15ad 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.CTs 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 263980540fbcf398c2feb11bcd1de172389a22cb683d15ad 0 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 263980540fbcf398c2feb11bcd1de172389a22cb683d15ad 0 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=263980540fbcf398c2feb11bcd1de172389a22cb683d15ad 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:28.969 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.CTs 00:24:29.228 16:01:26 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.CTs 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.CTs 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a382979dd2e876558eedec3412faea8e66bf2e38c27d7fab 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.uz3 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a382979dd2e876558eedec3412faea8e66bf2e38c27d7fab 2 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a382979dd2e876558eedec3412faea8e66bf2e38c27d7fab 2 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a382979dd2e876558eedec3412faea8e66bf2e38c27d7fab 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.uz3 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.uz3 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.uz3 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6d16e7fce604f48e176e1bfff837958f 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MER 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6d16e7fce604f48e176e1bfff837958f 1 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6d16e7fce604f48e176e1bfff837958f 1 
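gen_dhchap_key above turns random bytes from /dev/urandom into a hex secret and wraps it in the NVMe DH-HMAC-CHAP representation DHHC-1:<digest id>:<base64 blob>: via an inline "python -" step that the xtrace does not expand. A sketch of the "null 32" case; the python body is an assumed reconstruction (base64 of the ASCII key followed by its CRC-32, per the NVMe-oF secret format), not the literal helper, and python3 is assumed:

key=$(xxd -p -c0 -l 16 /dev/urandom)      # 16 random bytes -> 32 hex characters
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'EOF'
# Assumed reconstruction of the inline "python -" step:
# base64-encode the ASCII key followed by its little-endian CRC-32.
# Digest id in the prefix: 00=null, 01=sha256, 02=sha384, 03=sha512.
import base64, struct, sys, zlib
key = sys.argv[1].encode()
blob = key + struct.pack('<I', zlib.crc32(key) & 0xffffffff)
print('DHHC-1:00:' + base64.b64encode(blob).decode() + ':')
EOF
chmod 0600 "$file"
cat "$file"       # e.g. DHHC-1:00:MDJjNTQ2...: as stored into keys[0] above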
00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6d16e7fce604f48e176e1bfff837958f 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MER 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MER 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.MER 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3566e346ef32b98aa0b9484a0ef4f1d6 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.DXT 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3566e346ef32b98aa0b9484a0ef4f1d6 1 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3566e346ef32b98aa0b9484a0ef4f1d6 1 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3566e346ef32b98aa0b9484a0ef4f1d6 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.DXT 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.DXT 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.DXT 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=929666f88c9f8f622d40fa4a648963bed583a6218df3cae2 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FZG 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 929666f88c9f8f622d40fa4a648963bed583a6218df3cae2 2 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 929666f88c9f8f622d40fa4a648963bed583a6218df3cae2 2 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=929666f88c9f8f622d40fa4a648963bed583a6218df3cae2 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FZG 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FZG 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.FZG 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:24:29.228 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=88ce4178a4ffcd50586cec52f50638ef 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.GJ4 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 88ce4178a4ffcd50586cec52f50638ef 0 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 88ce4178a4ffcd50586cec52f50638ef 0 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=88ce4178a4ffcd50586cec52f50638ef 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.GJ4 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.GJ4 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.GJ4 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d99ff8eb325b934eca41eae56e2ee491edd9849607095547ca94f47ba92670ee 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.HzC 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d99ff8eb325b934eca41eae56e2ee491edd9849607095547ca94f47ba92670ee 3 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d99ff8eb325b934eca41eae56e2ee491edd9849607095547ca94f47ba92670ee 3 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d99ff8eb325b934eca41eae56e2ee491edd9849607095547ca94f47ba92670ee 00:24:29.486 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:24:29.487 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:24:29.487 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.HzC 00:24:29.487 16:01:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.HzC 00:24:29.487 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.HzC 00:24:29.487 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:29.487 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 844280 00:24:29.487 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 844280 ']' 00:24:29.487 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.487 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.487 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
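nvmfappstart (host/auth.sh@69) launches the SPDK target inside the target namespace with nvme_auth tracing enabled, and waitforlisten blocks until the RPC socket answers. The polling loop below is an illustrative stand-in for the real waitforlisten helper in autotest_common.sh, using the paths from the trace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# Stand-in for waitforlisten: poll the UNIX-domain RPC socket until it answers.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is up on /var/tmp/spdk.sock"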
00:24:29.487 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.487 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.745 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:29.745 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:24:29.745 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:29.745 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2Oy 00:24:29.745 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.2RL ]] 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2RL 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.CTs 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.uz3 ]] 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uz3 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.MER 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.DXT ]] 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.DXT 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
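The host/auth.sh@80-82 loop registers every generated secret file with the running target; rpc_cmd forwards to scripts/rpc.py against /var/tmp/spdk.sock, so spelled out for the first two key indices it is roughly:

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.2Oy     # host key for keyid 0
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2RL   # matching controller key
$rpc keyring_file_add_key key1  /tmp/spdk.key-null.CTs
$rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uz3
# ...continuing with key2/ckey2, key3/ckey3 and key4 (keyid 4 has no controller key).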
00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.FZG 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.GJ4 ]] 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.GJ4 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.HzC 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:29.746 16:01:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
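configure_kernel_target then builds a Linux-kernel nvmet subsystem at the configfs paths computed above, backed by the first usable NVMe block device, and exports it on TCP/4420. The mkdir/echo/ln -s sequence that follows in the trace amounts to the sketch below; the configfs attribute names are assumed from the standard nvmet layout, since xtrace does not show redirection targets:

modprobe nvmet                    # nvme-tcp was already loaded earlier in the trace
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # attribute names assumed
echo 1             > "$subsys/attr_allow_any_host"            # restricted later via allowed_hosts
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"       # first non-zoned NVMe device found
echo 1             > "$subsys/namespaces/1/enable"
echo 10.0.0.1      > "$port/addr_traddr"
echo tcp           > "$port/addr_trtype"
echo 4420          > "$port/addr_trsvcid"
echo ipv4          > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"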
00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:29.746 16:01:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:31.118 Waiting for block devices as requested 00:24:31.118 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:24:31.118 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:31.118 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:31.376 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:31.376 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:31.376 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:31.376 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:31.376 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:31.633 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:31.633 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:31.633 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:31.891 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:31.891 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:31.891 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:31.891 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:32.149 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:32.149 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:32.407 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:24:32.407 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:32.407 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:24:32.407 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:24:32.407 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:32.407 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:24:32.407 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:24:32.407 16:01:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:32.407 16:01:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:32.665 No valid GPT data, bailing 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:24:32.665 00:24:32.665 Discovery Log Number of Records 2, Generation counter 2 00:24:32.665 =====Discovery Log Entry 0====== 00:24:32.665 trtype: tcp 00:24:32.665 adrfam: ipv4 00:24:32.665 subtype: current discovery subsystem 00:24:32.665 treq: not specified, sq flow control disable supported 00:24:32.665 portid: 1 00:24:32.665 trsvcid: 4420 00:24:32.665 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:32.665 traddr: 10.0.0.1 00:24:32.665 eflags: none 00:24:32.665 sectype: none 00:24:32.665 =====Discovery Log Entry 1====== 00:24:32.665 trtype: tcp 00:24:32.665 adrfam: ipv4 00:24:32.665 subtype: nvme subsystem 00:24:32.665 treq: not specified, sq flow control disable supported 00:24:32.665 portid: 1 00:24:32.665 trsvcid: 4420 00:24:32.665 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:32.665 traddr: 10.0.0.1 00:24:32.665 eflags: none 00:24:32.665 sectype: none 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 
]] 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.665 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.666 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.946 nvme0n1 00:24:32.946 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.946 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.946 16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.946 
16:01:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.946 16:01:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.946 
16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.946 nvme0n1 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.946 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.245 16:01:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.245 nvme0n1 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
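With the fabric up, each connect_authenticate pass programs the kernel host entry with a digest, DH group and key pair (nvmet_auth_set_key) and then has the SPDK initiator attach with the matching --dhchap-key/--dhchap-ctrlr-key. Condensed from the sha256/ffdhe2048/keyid=1 pass above; the dhchap_* attribute names are assumed from the standard nvmet host entry, and the secrets are abbreviated here (full values appear in the trace):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
mkdir "$host"
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host
ln -s "$host" /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/
echo 'hmac(sha256)' > "$host/dhchap_hash"        # dhchap_* attribute names assumed
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:MjYzOTgw...SvvHbA==:' > "$host/dhchap_key"        # keys[1], abbreviated
echo 'DHHC-1:02:YTM4Mjk3...h3zD0A==:' > "$host/dhchap_ctrl_key"   # ckeys[1], abbreviated

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
     -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
     --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0

The remainder of the trace repeats this attach/verify/detach cycle for every digest, DH group, and keyid combination registered earlier.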
00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.245 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.246 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.508 nvme0n1 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:33.509 16:01:30 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.509 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.767 nvme0n1 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.767 16:01:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.024 nvme0n1 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:34.024 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.025 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.282 nvme0n1 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:24:34.282 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.283 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.541 nvme0n1 00:24:34.541 
16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.541 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.798 nvme0n1 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.798 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:34.799 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.799 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:34.799 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:34.799 16:01:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:34.799 16:01:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:34.799 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.799 16:01:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.056 nvme0n1 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.056 
16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.056 16:01:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.056 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.313 nvme0n1 00:24:35.313 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.313 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.313 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.313 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.313 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.313 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.313 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.313 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.313 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.313 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.313 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.313 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:35.313 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.313 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:35.314 16:01:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.314 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.576 nvme0n1 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:35.576 16:01:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.576 16:01:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.142 nvme0n1 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.142 16:01:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.142 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.400 nvme0n1 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.400 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.666 nvme0n1 00:24:36.666 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.666 16:01:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.666 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.666 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.666 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.666 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.924 16:01:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.183 nvme0n1 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:37.183 16:01:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.183 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.748 nvme0n1 00:24:37.748 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.748 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.748 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.748 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.748 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.748 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.748 16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.748 
16:01:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.748 16:01:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.748 16:01:35 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:37.748 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.313 nvme0n1 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.313 16:01:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.878 nvme0n1 00:24:38.878 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.878 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.878 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.878 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.878 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.878 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.878 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.878 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.878 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.878 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.135 
16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.135 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.700 nvme0n1 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.700 16:01:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.265 nvme0n1 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:40.265 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.266 16:01:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.198 nvme0n1 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.198 16:01:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.198 16:01:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.130 nvme0n1 00:24:42.130 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.130 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.130 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.130 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.130 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.130 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.130 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.130 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.130 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.130 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.130 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.130 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.130 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.131 16:01:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.063 nvme0n1 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.063 
16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
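The entries above and below repeat the host-side half of each iteration: connect_authenticate (host/auth.sh@104) restricts the initiator to the digest/DH-group pair under test with bdev_nvme_set_options, resolves the initiator IP (10.0.0.1), attaches with the per-iteration DH-HMAC-CHAP key pair, confirms the controller came up via bdev_nvme_get_controllers, and detaches again. A minimal stand-alone sketch of that sequence for the sha256/ffdhe8192/key3 case shown here, assuming the log's rpc_cmd helper wraps SPDK's scripts/rpc.py and that the DHHC-1 secrets were already registered under the names key3/ckey3 earlier in the run (that setup is not visible in this part of the log):

    # Restrict the initiator to the digest/DH group pair under test
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Attach to the target at 10.0.0.1:4420, authenticating with key3 and
    # offering ckey3 for bidirectional (controller) authentication
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3

    # The controller only exists if authentication succeeded
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

    # Tear down before the next digest/dhgroup/key combination
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The nvme0n1 lines that follow each successful attach are the namespace of cnode0 appearing on the host, which is the observable sign that the DH-HMAC-CHAP handshake completed.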
00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.063 16:01:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.435 nvme0n1 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:44.435 
16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.435 16:01:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.999 nvme0n1 00:24:44.999 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.999 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.999 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.999 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.999 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.257 nvme0n1 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.257 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
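On the target side, each iteration starts with nvmet_auth_set_key (host/auth.sh@103), whose traced body above only echoes the digest as 'hmac(sha384)', the DH group, and the DHHC-1 host/controller secrets (host/auth.sh@48-@51). Those echoes are presumably redirected into the kernel nvmet configfs entry for the host NQN; the sketch below shows that provisioning for the sha384/ffdhe2048/keyid=1 case, where the configfs attribute names and paths are an assumption based on the standard Linux nvmet layout rather than something visible in this trace, and $KEY1/$CKEY1 stand for the DHHC-1 strings echoed in the log:

    # Hypothetical target-side key provisioning (paths assumed, values from the log)
    HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha384)' > "$HOST/dhchap_hash"       # digest echoed at host/auth.sh@48
    echo 'ffdhe2048'    > "$HOST/dhchap_dhgroup"    # DH group echoed at host/auth.sh@49
    echo "$KEY1"        > "$HOST/dhchap_key"        # host secret echoed at host/auth.sh@50
    echo "$CKEY1"       > "$HOST/dhchap_ctrlr_key"  # controller secret echoed at host/auth.sh@51

Setting dhchap_ctrlr_key is what makes the subsequent attach with --dhchap-ctrlr-key ckey1 a bidirectional authentication test rather than host-only.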
00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.515 nvme0n1 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.515 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.773 nvme0n1 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.773 16:01:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:45.773 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.774 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.031 nvme0n1 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.031 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.032 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.289 nvme0n1 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:46.289 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
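The ip_candidates / [[ -z ... ]] records around this point come from the get_main_ns_ip helper in nvmf/common.sh, which picks the initiator-side address (10.0.0.1 in this run) used for every bdev_nvme_attach_controller call in this log. The following is only a sketch reconstructed from the expansions visible in the trace, not the verbatim helper; in particular, TEST_TRANSPORT is assumed to be the variable that expands to "tcp" here.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    # values are the *names* of the environment variables, not their contents
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # trace shows: [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]]
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}          # ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                   # indirect expansion: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                 # 10.0.0.1
}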
00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.290 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.547 nvme0n1 00:24:46.547 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.547 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.547 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.547 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.547 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.547 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
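The connect_authenticate sha384 ffdhe3072 1 call the trace enters here repeats the same RPC round used for every digest/dhgroup/keyid combination in this test. Condensed from the rpc_cmd calls visible in the log, one round looks roughly like the sketch below; DIGEST, DHGROUP and KEYID are placeholders standing in for the loop variables, and nvmet_auth_set_key / rpc_cmd are helpers defined elsewhere in the test suite.

# One authentication round, condensed from this trace (illustration only)
DIGEST=sha384 DHGROUP=ffdhe3072 KEYID=1
nvmet_auth_set_key "$DIGEST" "$DHGROUP" "$KEYID"                  # program the target-side DH-HMAC-CHAP key
rpc_cmd bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$KEYID" --dhchap-ctrlr-key "ckey$KEYID"      # ctrlr key only when a ckey exists for this keyid
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # authentication succeeded
rpc_cmd bdev_nvme_detach_controller nvme0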
00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.548 16:01:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.806 nvme0n1 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.806 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.063 nvme0n1 00:24:47.063 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.064 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.322 nvme0n1 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.322 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.580 nvme0n1 00:24:47.580 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.580 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.581 16:01:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.581 16:01:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.838 nvme0n1 00:24:47.838 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.838 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.838 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.838 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.838 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.838 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.096 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.354 nvme0n1 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.354 16:01:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.354 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.612 nvme0n1 00:24:48.612 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.612 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.612 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.612 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.612 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.612 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.612 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.612 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.612 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.612 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:48.869 16:01:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.869 16:01:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.127 nvme0n1 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:49.127 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.385 nvme0n1 00:24:49.385 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.385 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.385 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.385 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.385 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.385 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.385 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.385 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.385 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.385 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.385 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.385 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.385 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.385 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.386 16:01:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.951 nvme0n1 00:24:49.951 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.951 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.951 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.951 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.951 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.209 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.774 nvme0n1 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.774 16:01:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.774 16:01:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.339 nvme0n1 00:24:51.339 16:01:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.339 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.339 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.339 16:01:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.339 16:01:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.339 16:01:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.339 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.340 16:01:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.272 nvme0n1 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
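The trace around this point is one pass of the test's three nested loops (host/auth.sh@100-@103): for each digest, dhgroup and keyid it programs the secret on the nvmet target side via nvmet_auth_set_key, restricts the SPDK host to that single digest/dhgroup pair with bdev_nvme_set_options, attaches a controller with the matching --dhchap-key (adding --dhchap-ctrlr-key only when a controller secret exists), checks that nvme0 shows up in bdev_nvme_get_controllers, and detaches it again. A condensed, standalone sketch of the host-side cycle follows; it assumes scripts/rpc.py sits behind the rpc_cmd wrapper, that the key names key0..key4 / ckey0..ckey3 were registered earlier in the run, and SPDK_DIR is a placeholder for the SPDK checkout, none of which is spelled out in this excerpt.

  #!/usr/bin/env bash
  # Sketch of one host-side pass of the DH-HMAC-CHAP loop seen in this trace.
  # SPDK_DIR is a placeholder; rpc_cmd is a stand-in for the test's wrapper.
  rpc_cmd() { "${SPDK_DIR:?point at an SPDK checkout}/scripts/rpc.py" "$@"; }

  digest=sha384 dhgroup=ffdhe6144 keyid=4

  # Allow only the digest/dhgroup pair under test (host/auth.sh@60).
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Attach with the matching key; keyid 4 has no controller secret, so
  # --dhchap-ctrlr-key is omitted for it (host/auth.sh@58, @61).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"

  # Confirm the controller authenticated and came up, then tear it down (@64, @65).
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0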
00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.272 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.836 nvme0n1 00:24:52.836 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.836 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.836 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.836 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.836 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.836 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.836 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.836 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.836 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.836 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
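One detail worth noting before the next pass: keys 0 through 3 carry a controller (bidirectional) secret while key 4 does not, and the script appends --dhchap-ctrlr-key only when such a secret exists, via the ckey=(${ckeys[keyid]:+...}) expansion at host/auth.sh@58. A minimal illustration of that expansion, using a hypothetical placeholder secret rather than a key from this run:

  #!/usr/bin/env bash
  # Illustration of the optional controller-key argument built at host/auth.sh@58.
  # The DHHC-1 string below is a placeholder, not a key from this run.
  ckeys=([0]="DHHC-1:00:placeholderctrlrsecret:" [4]="")

  for keyid in 0 4; do
      # Expands to nothing when ckeys[keyid] is empty, so key 4 ends up with
      # unidirectional (host-only) authentication.
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> --dhchap-key key${keyid} ${ckey[*]}"
  done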
00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.837 16:01:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.773 nvme0n1 00:24:53.773 16:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.773 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.774 16:01:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.746 nvme0n1 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.746 16:01:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.680 nvme0n1 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:55.680 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.681 16:01:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.611 nvme0n1 00:24:56.611 16:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.612 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:56.612 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.612 16:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.612 16:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.612 16:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.612 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.612 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.612 16:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.612 16:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:56.868 16:01:53 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:56.868 16:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.869 16:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:56.869 16:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:56.869 16:01:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:56.869 16:01:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:56.869 16:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.869 16:01:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.800 nvme0n1 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.800 16:01:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.058 nvme0n1 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.058 16:01:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.058 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.316 nvme0n1 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.316 nvme0n1 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.316 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.574 16:01:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.574 16:01:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.574 nvme0n1 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.574 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.831 16:01:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.831 nvme0n1 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:58.831 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.832 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.089 nvme0n1 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.089 
16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.089 16:01:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.089 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.346 nvme0n1 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.346 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
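The nvmet_auth_set_key calls traced here all follow the same pattern: pick the digest and DH group, look up the key (and, when one is defined, the controller key) for the given index, and push the values to the kernel nvmet target. A minimal sketch of that helper, reconstructed from the traced echoes, follows; the configfs attribute paths and the hostnqn variable are assumptions, since the trace does not show the redirection targets.

# Sketch of the nvmet_auth_set_key helper reconstructed from the traced echoes.
# The configfs attribute paths and the hostnqn variable are assumptions here;
# the trace only shows the echo side of each redirection.
nvmet_auth_set_key() {
	local digest=$1 dhgroup=$2 keyid=$3
	local key=${keys[keyid]} ckey=${ckeys[keyid]}
	local host=/sys/kernel/config/nvmet/hosts/${hostnqn:?}   # assumed path

	echo "hmac($digest)" > "$host/dhchap_hash"       # e.g. hmac(sha512)
	echo "$dhgroup" > "$host/dhchap_dhgroup"         # e.g. ffdhe2048
	echo "$key" > "$host/dhchap_key"                 # DHHC-1:xx:... host key
	# A controller key is written only when the ckeys table defines one.
	[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}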
00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.347 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.603 nvme0n1 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.603 16:01:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
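The get_main_ns_ip fragment that recurs throughout the trace resolves which address the initiator should connect to for the configured transport: rdma runs use NVMF_FIRST_TARGET_IP, while tcp runs (as in this job) use NVMF_INITIATOR_IP, which expands to 10.0.0.1 here. A condensed sketch of that logic, assuming those variables are exported by the surrounding nvmf test environment:

# Condensed sketch of the get_main_ns_ip logic repeated in the trace.
# TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_INITIATOR_IP are assumed to be
# exported by the surrounding nvmf test environment.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		[rdma]=NVMF_FIRST_TARGET_IP   # rdma runs resolve the first target IP
		[tcp]=NVMF_INITIATOR_IP       # tcp runs (this job) resolve the initiator IP
	)

	[[ -n ${TEST_TRANSPORT:-} && -n ${ip_candidates[$TEST_TRANSPORT]:-} ]] || return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}   # holds the *name* of the variable to use
	[[ -n ${!ip} ]] || return 1            # indirect expansion of that variable
	echo "${!ip}"                          # -> 10.0.0.1 in this run
}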
00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:59.603 16:01:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:59.604 16:01:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:59.604 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.604 16:01:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.860 nvme0n1 00:24:59.860 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.860 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.860 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.860 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.860 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.860 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.860 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.860 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.860 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.860 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:00.118 
16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.118 nvme0n1 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.118 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.375 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.375 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.375 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.375 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.375 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.375 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:00.375 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.375 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:25:00.375 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.375 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:00.375 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:00.375 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:00.375 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:25:00.375 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.376 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.634 nvme0n1 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.634 16:01:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.634 16:01:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.198 nvme0n1 00:25:01.198 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.198 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.198 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.198 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.198 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
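Each connect_authenticate pass traced here exercises the initiator side: it restricts the allowed digests and DH groups with bdev_nvme_set_options, attaches the controller over TCP with the matching --dhchap-key/--dhchap-ctrlr-key, checks that bdev_nvme_get_controllers reports nvme0, and detaches before moving to the next combination. A minimal sketch assembled from those rpc_cmd calls, with the rpc_cmd wrapper, the keys/ckeys tables and get_main_ns_ip coming from the surrounding test scripts:

# Sketch of one connect/verify/detach pass, built from the rpc_cmd calls in the trace.
connect_authenticate() {
	local digest=$1 dhgroup=$2 keyid=$3

	# Restrict the initiator to the digest/dhgroup combination under test.
	rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

	# Attach over TCP with the DH-HMAC-CHAP key for this index; the controller
	# key option is added only when a ckey exists for this keyid.
	rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
		-a "$(get_main_ns_ip)" -s 4420 \
		-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
		--dhchap-key "key$keyid" \
		${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

	# Authentication succeeded if the attached controller is reported, then detach.
	[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
	rpc_cmd bdev_nvme_detach_controller nvme0
}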
00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.199 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.457 nvme0n1 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.457 16:01:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.023 nvme0n1 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.023 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.280 nvme0n1 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
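Each iteration traced above begins with nvmet_auth_set_key (host/auth.sh@103): it picks the DHHC-1 key for the current key id plus, when one is defined, the matching controller key, then echoes the HMAC name, the DH group and the key strings. The xtrace output does not show where those echoes are redirected; assuming the helper programs the Linux kernel nvmet target through configfs (the usual mechanism for in-band NVMe authentication), the target-side setup for the sha512/ffdhe6144/keyid 0 round would look roughly like the sketch below. The configfs paths and the hostnqn mapping are assumptions; the digest, DH group and key-selection logic come from the trace itself.

  # Rough target-side equivalent of: nvmet_auth_set_key sha512 ffdhe6144 0
  # The configfs location is an assumption (the redirect targets are not in the trace);
  # $key / $ckey stand for the DHHC-1:... strings selected for this key id.
  hostnqn=nqn.2024-02.io.spdk:host0
  host_cfs=/sys/kernel/config/nvmet/hosts/$hostnqn              # assumed nvmet configfs host entry
  echo 'hmac(sha512)' > "$host_cfs/dhchap_hash"                 # digest, cf. auth.sh@48
  echo ffdhe6144      > "$host_cfs/dhchap_dhgroup"              # DH group, cf. auth.sh@49
  echo "$key"         > "$host_cfs/dhchap_key"                  # host key, cf. auth.sh@50
  [[ -n $ckey ]] && echo "$ckey" > "$host_cfs/dhchap_ctrl_key"  # optional controller key, cf. auth.sh@51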
00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.280 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.845 nvme0n1 00:25:02.845 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.845 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.845 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.845 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.845 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.845 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.845 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.845 16:01:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.845 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.845 16:01:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
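On the initiator side, connect_authenticate (host/auth.sh@104) then exercises the same key id: it restricts bdev_nvme to a single DH-HMAC-CHAP digest and DH group, attaches a controller with the matching --dhchap-key/--dhchap-ctrlr-key, checks that nvme0 shows up in bdev_nvme_get_controllers, and detaches it again. Condensed from the rpc_cmd calls in the trace (rpc_cmd is the test suite's JSON-RPC helper; the NQNs, address and port are the ones used throughout this run), one round for sha512/ffdhe6144/keyid 1 looks roughly like:

  # One host-side authentication round, condensed from the rpc_cmd trace above.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1                     # ctrlr key only when ckey1 is defined
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # attach (and thus auth) succeeded
  rpc_cmd bdev_nvme_detach_controller nvme0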
00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.845 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.597 nvme0n1 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.597 16:02:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.160 nvme0n1 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:04.160 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.161 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.724 nvme0n1 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:04.724 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.725 16:02:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.289 nvme0n1 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.289 16:02:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDJjNTQ2NzgwYTQyZWYxYTI2OTlmNGExMWY5NzY3ZmFVxXO0: 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: ]] 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWQwZmE5MjgzZjU3OTBkNTE3NzliODdkYjE5NTk2ZGExNGQzNzQ5YTQ3ZmRjZDJmMzdkZjEwYjRlYTRjMzQzY/DLxx4=: 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.289 16:02:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.221 nvme0n1 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.221 16:02:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.154 nvme0n1 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.154 16:02:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NmQxNmU3ZmNlNjA0ZjQ4ZTE3NmUxYmZmZjgzNzk1OGaT4VKX: 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: ]] 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzU2NmUzNDZlZjMyYjk4YWEwYjk0ODRhMGVmNGYxZDYwukot: 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.154 16:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.155 16:02:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.155 16:02:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.155 16:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.155 16:02:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.087 nvme0n1 00:25:08.087 16:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.087 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.087 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:08.087 16:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.087 16:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.088 16:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTI5NjY2Zjg4YzlmOGY2MjJkNDBmYTRhNjQ4OTYzYmVkNTgzYTYyMThkZjNjYWUyGKv+vw==: 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: ]] 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODhjZTQxNzhhNGZmY2Q1MDU4NmNlYzUyZjUwNjM4ZWZAIn2K: 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:08.345 16:02:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:08.345 16:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:08.346 16:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:08.346 16:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:08.346 16:02:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:08.346 16:02:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:08.346 16:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.346 16:02:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.290 nvme0n1 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZDk5ZmY4ZWIzMjViOTM0ZWNhNDFlYWU1NmUyZWU0OTFlZGQ5ODQ5NjA3MDk1NTQ3Y2E5NGY0N2JhOTI2NzBlZWd+4mc=: 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:09.290 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:09.291 16:02:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.246 nvme0n1 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjYzOTgwNTQwZmJjZjM5OGMyZmViMTFiY2QxZGUxNzIzODlhMjJjYjY4M2QxNWFkSvvHbA==: 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTM4Mjk3OWRkMmU4NzY1NThlZWRlYzM0MTJmYWVhOGU2NmJmMmUzOGMyN2Q3ZmFih3zD0A==: 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.246 
16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.246 request: 00:25:10.246 { 00:25:10.246 "name": "nvme0", 00:25:10.246 "trtype": "tcp", 00:25:10.246 "traddr": "10.0.0.1", 00:25:10.246 "adrfam": "ipv4", 00:25:10.246 "trsvcid": "4420", 00:25:10.246 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:10.246 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:10.246 "prchk_reftag": false, 00:25:10.246 "prchk_guard": false, 00:25:10.246 "hdgst": false, 00:25:10.246 "ddgst": false, 00:25:10.246 "method": "bdev_nvme_attach_controller", 00:25:10.246 "req_id": 1 00:25:10.246 } 00:25:10.246 Got JSON-RPC error response 00:25:10.246 response: 00:25:10.246 { 00:25:10.246 "code": -5, 00:25:10.246 "message": "Input/output error" 00:25:10.246 } 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.246 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.504 request: 00:25:10.504 { 00:25:10.504 "name": "nvme0", 00:25:10.504 "trtype": "tcp", 00:25:10.504 "traddr": "10.0.0.1", 00:25:10.504 "adrfam": "ipv4", 00:25:10.504 "trsvcid": "4420", 00:25:10.504 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:10.504 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:10.504 "prchk_reftag": false, 00:25:10.504 "prchk_guard": false, 00:25:10.504 "hdgst": false, 00:25:10.504 "ddgst": false, 00:25:10.504 "dhchap_key": "key2", 00:25:10.504 "method": "bdev_nvme_attach_controller", 00:25:10.504 "req_id": 1 00:25:10.504 } 00:25:10.504 Got JSON-RPC error response 00:25:10.504 response: 00:25:10.504 { 00:25:10.504 "code": -5, 00:25:10.504 "message": "Input/output error" 00:25:10.504 } 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:10.504 16:02:07 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.504 request: 00:25:10.504 { 00:25:10.504 "name": "nvme0", 00:25:10.504 "trtype": "tcp", 00:25:10.504 "traddr": "10.0.0.1", 00:25:10.504 "adrfam": "ipv4", 
00:25:10.504 "trsvcid": "4420", 00:25:10.504 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:10.504 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:10.504 "prchk_reftag": false, 00:25:10.504 "prchk_guard": false, 00:25:10.504 "hdgst": false, 00:25:10.504 "ddgst": false, 00:25:10.504 "dhchap_key": "key1", 00:25:10.504 "dhchap_ctrlr_key": "ckey2", 00:25:10.504 "method": "bdev_nvme_attach_controller", 00:25:10.504 "req_id": 1 00:25:10.504 } 00:25:10.504 Got JSON-RPC error response 00:25:10.504 response: 00:25:10.504 { 00:25:10.504 "code": -5, 00:25:10.504 "message": "Input/output error" 00:25:10.504 } 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:10.504 rmmod nvme_tcp 00:25:10.504 rmmod nvme_fabrics 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 844280 ']' 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 844280 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 844280 ']' 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 844280 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 844280 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:10.504 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:10.505 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 844280' 00:25:10.505 killing process with pid 844280 00:25:10.505 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 844280 00:25:10.505 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 844280 00:25:10.763 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:25:10.763 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:10.764 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:10.764 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:10.764 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:10.764 16:02:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.764 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.764 16:02:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.293 16:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:13.293 16:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:13.293 16:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:13.293 16:02:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:13.293 16:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:13.293 16:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:25:13.293 16:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:13.293 16:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:13.293 16:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:13.293 16:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:13.293 16:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:13.293 16:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:13.293 16:02:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:14.227 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:14.227 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:14.227 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:14.227 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:14.227 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:14.227 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:14.227 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:14.227 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:14.227 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:14.227 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:14.227 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:14.227 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:14.227 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:14.227 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:14.227 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:14.227 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:15.164 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:25:15.422 16:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.2Oy /tmp/spdk.key-null.CTs /tmp/spdk.key-sha256.MER /tmp/spdk.key-sha384.FZG /tmp/spdk.key-sha512.HzC 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:15.422 16:02:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:16.357 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:16.357 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:16.357 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:16.357 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:16.357 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:16.357 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:16.357 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:16.357 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:16.357 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:16.357 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:16.357 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:16.357 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:16.357 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:16.357 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:16.357 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:16.357 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:16.357 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:16.627 00:25:16.627 real 0m50.389s 00:25:16.627 user 0m47.833s 00:25:16.627 sys 0m5.964s 00:25:16.627 16:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:16.627 16:02:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.627 ************************************ 00:25:16.627 END TEST nvmf_auth_host 00:25:16.627 ************************************ 00:25:16.627 16:02:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:16.627 16:02:13 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:25:16.627 16:02:13 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:16.627 16:02:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:16.627 16:02:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:16.627 16:02:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:16.627 ************************************ 00:25:16.627 START TEST nvmf_digest 00:25:16.627 ************************************ 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:16.627 * Looking for test storage... 
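With the auth host suite finished (about 50 seconds of wall time above), the harness moves straight on to the digest suite. run_test only wraps the per-host script, so on a development box the roughly equivalent standalone invocation would be (paths taken from this job's workspace):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/nvmf/host/digest.sh --transport=tcp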
00:25:16.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.627 16:02:13 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:16.628 16:02:13 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:25:16.628 16:02:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:19.163 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:19.163 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:19.163 Found net devices under 0000:84:00.0: cvl_0_0 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:19.163 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:19.164 Found net devices under 0000:84:00.1: cvl_0_1 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:19.164 16:02:15 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:19.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:25:19.164 00:25:19.164 --- 10.0.0.2 ping statistics --- 00:25:19.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.164 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:19.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:19.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:25:19.164 00:25:19.164 --- 10.0.0.1 ping statistics --- 00:25:19.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.164 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:19.164 ************************************ 00:25:19.164 START TEST nvmf_digest_clean 00:25:19.164 ************************************ 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=853897 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 853897 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 853897 ']' 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.164 
16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:19.164 [2024-07-12 16:02:16.138508] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:25:19.164 [2024-07-12 16:02:16.138597] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.164 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.164 [2024-07-12 16:02:16.206143] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.164 [2024-07-12 16:02:16.311826] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.164 [2024-07-12 16:02:16.311896] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.164 [2024-07-12 16:02:16.311924] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.164 [2024-07-12 16:02:16.311936] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.164 [2024-07-12 16:02:16.311945] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
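nvmfappstart above reduces to launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace created earlier and blocking until its RPC socket answers. A rough sketch of that sequence, with the rpc_get_methods poll standing in for what waitforlisten does internally (paths and the namespace name are the ones printed in the trace):

  sudo ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # poll the default RPC socket until the target is ready to accept commands
  until sudo scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done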
00:25:19.164 [2024-07-12 16:02:16.311978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.164 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:19.421 null0 00:25:19.421 [2024-07-12 16:02:16.470394] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.421 [2024-07-12 16:02:16.494591] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=853926 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 853926 /var/tmp/bperf.sock 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 853926 ']' 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:25:19.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:19.422 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:19.422 [2024-07-12 16:02:16.540253] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:25:19.422 [2024-07-12 16:02:16.540331] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid853926 ] 00:25:19.422 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.422 [2024-07-12 16:02:16.598519] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.422 [2024-07-12 16:02:16.704664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.679 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:19.679 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:19.679 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:19.679 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:19.679 16:02:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:19.937 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.937 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:20.501 nvme0n1 00:25:20.501 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:20.501 16:02:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:20.501 Running I/O for 2 seconds... 
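The run_bperf pass that produced the trace above boils down to four steps: start bdevperf paused on its own RPC socket, release it with framework_start_init, attach the bdev_nvme controller with the NVMe/TCP data digest enabled (--ddgst is what drives the crc32c operations counted later), and then run the 2-second workload through bdevperf.py. Condensed from the commands in the log, with paths shortened to the SPDK checkout root:

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests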
00:25:22.398 00:25:22.398 Latency(us) 00:25:22.398 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.398 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:22.398 nvme0n1 : 2.01 20022.18 78.21 0.00 0.00 6386.04 2754.94 16893.72 00:25:22.398 =================================================================================================================== 00:25:22.398 Total : 20022.18 78.21 0.00 0.00 6386.04 2754.94 16893.72 00:25:22.398 0 00:25:22.398 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:22.398 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:22.398 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:22.398 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:22.398 | select(.opcode=="crc32c") 00:25:22.398 | "\(.module_name) \(.executed)"' 00:25:22.398 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 853926 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 853926 ']' 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 853926 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 853926 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 853926' 00:25:22.656 killing process with pid 853926 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 853926 00:25:22.656 Received shutdown signal, test time was about 2.000000 seconds 00:25:22.656 00:25:22.656 Latency(us) 00:25:22.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.656 =================================================================================================================== 00:25:22.656 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:22.656 16:02:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 853926 00:25:22.913 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:22.913 16:02:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:22.913 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:22.914 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:22.914 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:22.914 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:22.914 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:22.914 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=854331 00:25:22.914 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:22.914 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 854331 /var/tmp/bperf.sock 00:25:22.914 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 854331 ']' 00:25:22.914 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:22.914 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.914 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:22.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:22.914 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.914 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:23.171 [2024-07-12 16:02:20.227251] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:25:23.171 [2024-07-12 16:02:20.227359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854331 ] 00:25:23.171 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:23.171 Zero copy mechanism will not be used. 
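The second pass repeats the same flow with a 128 KiB block size at queue depth 16; the "zero copy threshold (65536)" lines are informational, recording that these 131072-byte I/Os exceed the reported threshold and skip the zero-copy path. Only the bdevperf invocation changes (a sketch, same socket and flags as the first run):

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -q 16 -t 2 -z --wait-for-rpc &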
00:25:23.171 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.171 [2024-07-12 16:02:20.286353] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.171 [2024-07-12 16:02:20.396371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.171 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.171 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:23.172 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:23.172 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:23.172 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:23.736 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:23.736 16:02:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:23.992 nvme0n1 00:25:23.992 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:23.992 16:02:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:24.250 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:24.250 Zero copy mechanism will not be used. 00:25:24.250 Running I/O for 2 seconds... 
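After each 2-second window, digest.sh checks which accel module actually executed the crc32c operations; with scan_dsa=false the expected module is "software", and the check is just the stats RPC filtered through jq (filter taken from the trace, run against the bperf socket):

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'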
00:25:26.147 00:25:26.147 Latency(us) 00:25:26.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.147 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:26.147 nvme0n1 : 2.00 5106.47 638.31 0.00 0.00 3129.46 764.59 4805.97 00:25:26.147 =================================================================================================================== 00:25:26.147 Total : 5106.47 638.31 0.00 0.00 3129.46 764.59 4805.97 00:25:26.147 0 00:25:26.147 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:26.147 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:26.147 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:26.147 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:26.147 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:26.147 | select(.opcode=="crc32c") 00:25:26.147 | "\(.module_name) \(.executed)"' 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 854331 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 854331 ']' 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 854331 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 854331 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 854331' 00:25:26.405 killing process with pid 854331 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 854331 00:25:26.405 Received shutdown signal, test time was about 2.000000 seconds 00:25:26.405 00:25:26.405 Latency(us) 00:25:26.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.405 =================================================================================================================== 00:25:26.405 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:26.405 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 854331 00:25:26.662 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:26.662 16:02:23 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:26.662 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:26.662 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:26.662 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:26.662 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:26.662 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:26.662 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=854854 00:25:26.662 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:26.663 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 854854 /var/tmp/bperf.sock 00:25:26.663 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 854854 ']' 00:25:26.663 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:26.663 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:26.663 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:26.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:26.663 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:26.663 16:02:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:26.663 [2024-07-12 16:02:23.895395] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:25:26.663 [2024-07-12 16:02:23.895476] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid854854 ] 00:25:26.663 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.921 [2024-07-12 16:02:23.971230] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.921 [2024-07-12 16:02:24.104555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:26.921 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:26.921 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:26.921 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:26.921 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:26.921 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:27.488 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:27.488 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:27.745 nvme0n1 00:25:27.745 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:27.745 16:02:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:28.002 Running I/O for 2 seconds... 
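Each run ends with a check that the digests really went through the accel framework; a sketch of that verification, assuming the same bperf RPC socket and the jq filter the script itself uses:

  read -r acc_module acc_executed < <(./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  (( acc_executed > 0 ))              # at least one crc32c operation must have been executed
  [[ $acc_module == software ]]       # scan_dsa=false in these runs, so the software module is expected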
00:25:29.900 00:25:29.900 Latency(us) 00:25:29.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.900 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:29.900 nvme0n1 : 2.00 23572.68 92.08 0.00 0.00 5423.12 2718.53 11845.03 00:25:29.900 =================================================================================================================== 00:25:29.900 Total : 23572.68 92.08 0.00 0.00 5423.12 2718.53 11845.03 00:25:29.900 0 00:25:29.901 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:29.901 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:29.901 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:29.901 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:29.901 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:29.901 | select(.opcode=="crc32c") 00:25:29.901 | "\(.module_name) \(.executed)"' 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 854854 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 854854 ']' 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 854854 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 854854 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 854854' 00:25:30.157 killing process with pid 854854 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 854854 00:25:30.157 Received shutdown signal, test time was about 2.000000 seconds 00:25:30.157 00:25:30.157 Latency(us) 00:25:30.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.157 =================================================================================================================== 00:25:30.157 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:30.157 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 854854 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:30.415 16:02:27 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=855268 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 855268 /var/tmp/bperf.sock 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 855268 ']' 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:30.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:30.415 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:30.672 [2024-07-12 16:02:27.714599] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:25:30.672 [2024-07-12 16:02:27.714684] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855268 ] 00:25:30.672 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:30.672 Zero copy mechanism will not be used. 
00:25:30.672 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.672 [2024-07-12 16:02:27.772561] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.672 [2024-07-12 16:02:27.882519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.672 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:30.672 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:25:30.672 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:30.672 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:30.672 16:02:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:31.236 16:02:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:31.236 16:02:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:31.492 nvme0n1 00:25:31.492 16:02:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:31.492 16:02:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:31.748 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:31.748 Zero copy mechanism will not be used. 00:25:31.748 Running I/O for 2 seconds... 
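The clean-digest phase walks the same flow through three workload shapes; roughly equivalent to the following calls into the script's own run_bperf helper (the final argument is scan_dsa, which stays false on this host):

  run_bperf randread  131072 16  false   # 128 KiB random reads,  queue depth 16
  run_bperf randwrite 4096   128 false   # 4 KiB random writes,   queue depth 128
  run_bperf randwrite 131072 16  false   # 128 KiB random writes, queue depth 16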
00:25:33.643 00:25:33.643 Latency(us) 00:25:33.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.643 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:33.643 nvme0n1 : 2.00 4909.74 613.72 0.00 0.00 3251.40 2390.85 9563.40 00:25:33.643 =================================================================================================================== 00:25:33.643 Total : 4909.74 613.72 0.00 0.00 3251.40 2390.85 9563.40 00:25:33.643 0 00:25:33.643 16:02:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:33.643 16:02:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:33.643 16:02:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:33.643 16:02:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:33.643 | select(.opcode=="crc32c") 00:25:33.643 | "\(.module_name) \(.executed)"' 00:25:33.643 16:02:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:33.900 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:33.900 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:33.901 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:33.901 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:33.901 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 855268 00:25:33.901 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 855268 ']' 00:25:33.901 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 855268 00:25:33.901 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:33.901 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:33.901 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 855268 00:25:33.901 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:33.901 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:33.901 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 855268' 00:25:33.901 killing process with pid 855268 00:25:33.901 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 855268 00:25:33.901 Received shutdown signal, test time was about 2.000000 seconds 00:25:33.901 00:25:33.901 Latency(us) 00:25:33.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.901 =================================================================================================================== 00:25:33.901 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:33.901 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 855268 00:25:34.158 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 853897 00:25:34.158 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@948 -- # '[' -z 853897 ']' 00:25:34.158 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 853897 00:25:34.158 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:25:34.158 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:34.158 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 853897 00:25:34.416 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:34.416 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:34.416 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 853897' 00:25:34.416 killing process with pid 853897 00:25:34.416 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 853897 00:25:34.416 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 853897 00:25:34.416 00:25:34.416 real 0m15.605s 00:25:34.416 user 0m30.139s 00:25:34.416 sys 0m5.123s 00:25:34.416 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:34.416 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:34.416 ************************************ 00:25:34.416 END TEST nvmf_digest_clean 00:25:34.416 ************************************ 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:34.674 ************************************ 00:25:34.674 START TEST nvmf_digest_error 00:25:34.674 ************************************ 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=855731 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 855731 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 855731 ']' 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:34.674 16:02:31 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:34.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:34.674 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:34.674 [2024-07-12 16:02:31.795227] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:25:34.674 [2024-07-12 16:02:31.795329] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:34.674 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.674 [2024-07-12 16:02:31.859521] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.674 [2024-07-12 16:02:31.965278] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.674 [2024-07-12 16:02:31.965330] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.674 [2024-07-12 16:02:31.965359] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.674 [2024-07-12 16:02:31.965370] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:34.674 [2024-07-12 16:02:31.965380] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
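The error phase that starts here routes crc32c through the accel error-injection module so digests can be corrupted on demand. A condensed sketch of the RPCs the trace below issues; rpc.py without -s stands in for rpc_cmd against the nvmf target's default RPC socket (an assumption about how rpc_cmd resolves here), and the target-side bdev/transport/listener arguments are elided:

  # target side (nvmf_tgt was started with --wait-for-rpc)
  rpc.py accel_assign_opc -o crc32c -m error     # crc32c is now handled by the 'error' module
  rpc.py framework_start_init                    # assumed; the trace resumes after target config
  # ... null0 bdev, TCP transport and the 10.0.0.2:4420 listener are created next ...

  # initiator (bdevperf) side, over /var/tmp/bperf.sock
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc.py accel_error_inject_error -o crc32c -t disable          # injection starts disabled
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # arguments copied from the trace
  bdevperf.py -s /var/tmp/bperf.sock perform_tests              # reads now hit data digest errors and are retried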
00:25:34.674 [2024-07-12 16:02:31.965419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.932 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:34.932 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:34.932 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:34.932 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:34.932 16:02:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:34.932 [2024-07-12 16:02:32.026003] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:34.932 null0 00:25:34.932 [2024-07-12 16:02:32.138904] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.932 [2024-07-12 16:02:32.163121] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=855844 00:25:34.932 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:34.933 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 855844 /var/tmp/bperf.sock 00:25:34.933 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 855844 ']' 00:25:34.933 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:34.933 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:25:34.933 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:34.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:34.933 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:34.933 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:34.933 [2024-07-12 16:02:32.207572] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:25:34.933 [2024-07-12 16:02:32.207648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855844 ] 00:25:35.190 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.190 [2024-07-12 16:02:32.265563] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.190 [2024-07-12 16:02:32.371052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.190 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:35.190 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:35.190 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:35.190 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:35.447 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:35.447 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.447 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:35.447 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.447 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:35.447 16:02:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.010 nvme0n1 00:25:36.010 16:02:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:36.010 16:02:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.010 16:02:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:36.010 16:02:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.010 16:02:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:36.010 16:02:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:36.268 Running I/O for 2 seconds... 00:25:36.268 [2024-07-12 16:02:33.329538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.329599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.329620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.344083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.344127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.344143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.358283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.358312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.358348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.368546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.368588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.368605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.384015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.384058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.384074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.399988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.400018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.400049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.414212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.414241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11802 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.414272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.429502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.429539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.429571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.439590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.439618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.439648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.454530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.454573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.454590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.470013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.470057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.470073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.485372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.485401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.485432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.495751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.495780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.495812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.510127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.510169] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.510186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.525357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.525388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:24063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.525419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.535682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.535709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.535748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.268 [2024-07-12 16:02:33.548764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.268 [2024-07-12 16:02:33.548807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.268 [2024-07-12 16:02:33.548824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.563888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.563919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.563936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.574475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.574502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.574533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.589378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.589407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.589438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.603609] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.603638] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.603670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.615298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.615327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.615357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.627915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.627944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.627976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.643206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.643235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.643265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.653732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.653767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.653805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.667134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.667163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.667195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.682536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.682564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.682595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.697358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.697386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.697417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.706786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.706815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.706847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.721378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.721405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.721436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.732959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.732988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.733022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.748437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.748477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.748507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.762748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.762777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.762809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.773923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.773958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.773996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.786683] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.786711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.786763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.799866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.526 [2024-07-12 16:02:33.799894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.526 [2024-07-12 16:02:33.799925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.526 [2024-07-12 16:02:33.810362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.527 [2024-07-12 16:02:33.810389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.527 [2024-07-12 16:02:33.810420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:33.822501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:33.822532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:33.822548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:33.836035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:33.836078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:33.836094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:33.850068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:33.850095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:33.850125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:33.862146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:33.862173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:33.862204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:33.877554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:33.877581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:33.877613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:33.892600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:33.892643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:33.892659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:33.904920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:33.904948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:33.904979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:33.916239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:33.916266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:33.916297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:33.930209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:33.930236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:33.930267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:33.944118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:33.944146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:33.944176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:33.954756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:33.954785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:33.954822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:33.967679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:33.967706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:33.967743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:33.977863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:33.977892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:33.977926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:33.991832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:33.991864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:33.991896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:34.003332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:34.003359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:34.003389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:34.017804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:34.017847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:34.017864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:34.032572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:34.032600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:34.032630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:34.041994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:34.042022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:34.042037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:34.055923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.784 [2024-07-12 16:02:34.055951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.784 [2024-07-12 16:02:34.055982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:36.784 [2024-07-12 16:02:34.067490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:36.785 [2024-07-12 16:02:34.067516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.785 [2024-07-12 16:02:34.067546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.079085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.079113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.079143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.089475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.089503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.089533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.100834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.100861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.100892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.113300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.113327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.113356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.125666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.125694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:37.042 [2024-07-12 16:02:34.125725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.138924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.138965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.138981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.152236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.152262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.152292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.162729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.162762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.162793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.178193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.178233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.178249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.188193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.188219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.188249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.202121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.202148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.202183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.216822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.216849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:1558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.216881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.230220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.230246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.230276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.240495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.240522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.240553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.254708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.254757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.254774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.264596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.264623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.264654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.277599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.277626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.277656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.291575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.291601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:6250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.291632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.302288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.302315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.302346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.312295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.312327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.312358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.042 [2024-07-12 16:02:34.324186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.042 [2024-07-12 16:02:34.324212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.042 [2024-07-12 16:02:34.324243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.335675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.335720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.335744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.349727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.349765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.349797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.360481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.360508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7635 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.360538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.375212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.375239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.375270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.390150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 
00:25:37.300 [2024-07-12 16:02:34.390178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.390208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.399953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.399979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.400010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.414667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.414694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.414725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.429310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.429351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.429367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.443485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.443512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.443543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.458427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.458454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.458485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.473350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.473379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.473409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.483052] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.483079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.483109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.497001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.497028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.497043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.509928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.509956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.509987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.521264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.521293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.521324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.534495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.534522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.534557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.300 [2024-07-12 16:02:34.550030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.300 [2024-07-12 16:02:34.550071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.300 [2024-07-12 16:02:34.550087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.301 [2024-07-12 16:02:34.564623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.301 [2024-07-12 16:02:34.564650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.301 [2024-07-12 16:02:34.564681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:37.301 [2024-07-12 16:02:34.579485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.301 [2024-07-12 16:02:34.579512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.301 [2024-07-12 16:02:34.579542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.301 [2024-07-12 16:02:34.593244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.301 [2024-07-12 16:02:34.593274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.301 [2024-07-12 16:02:34.593306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.558 [2024-07-12 16:02:34.604939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.558 [2024-07-12 16:02:34.604967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.558 [2024-07-12 16:02:34.604998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.558 [2024-07-12 16:02:34.618830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.558 [2024-07-12 16:02:34.618859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.558 [2024-07-12 16:02:34.618890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.558 [2024-07-12 16:02:34.632987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.558 [2024-07-12 16:02:34.633031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.558 [2024-07-12 16:02:34.633047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.558 [2024-07-12 16:02:34.646981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.558 [2024-07-12 16:02:34.647023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.558 [2024-07-12 16:02:34.647040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.558 [2024-07-12 16:02:34.658090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.558 [2024-07-12 16:02:34.658117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.558 [2024-07-12 16:02:34.658148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.558 [2024-07-12 16:02:34.672288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.558 [2024-07-12 16:02:34.672315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.558 [2024-07-12 16:02:34.672346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.558 [2024-07-12 16:02:34.687528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.558 [2024-07-12 16:02:34.687555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.558 [2024-07-12 16:02:34.687585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.558 [2024-07-12 16:02:34.702306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.558 [2024-07-12 16:02:34.702333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.558 [2024-07-12 16:02:34.702363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.558 [2024-07-12 16:02:34.715962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.558 [2024-07-12 16:02:34.715990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.558 [2024-07-12 16:02:34.716020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.559 [2024-07-12 16:02:34.730702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.559 [2024-07-12 16:02:34.730753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.559 [2024-07-12 16:02:34.730770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.559 [2024-07-12 16:02:34.741859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.559 [2024-07-12 16:02:34.741903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.559 [2024-07-12 16:02:34.741920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.559 [2024-07-12 16:02:34.755270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.559 [2024-07-12 16:02:34.755296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.559 [2024-07-12 16:02:34.755328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.559 [2024-07-12 16:02:34.770419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.559 [2024-07-12 16:02:34.770447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.559 [2024-07-12 16:02:34.770483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.559 [2024-07-12 16:02:34.785348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.559 [2024-07-12 16:02:34.785376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.559 [2024-07-12 16:02:34.785406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.559 [2024-07-12 16:02:34.794317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.559 [2024-07-12 16:02:34.794344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.559 [2024-07-12 16:02:34.794374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.559 [2024-07-12 16:02:34.808487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.559 [2024-07-12 16:02:34.808529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.559 [2024-07-12 16:02:34.808546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.559 [2024-07-12 16:02:34.821888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.559 [2024-07-12 16:02:34.821919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.559 [2024-07-12 16:02:34.821950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.559 [2024-07-12 16:02:34.833110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.559 [2024-07-12 16:02:34.833138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.559 [2024-07-12 16:02:34.833168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.559 [2024-07-12 16:02:34.847393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.559 [2024-07-12 16:02:34.847420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:37.559 [2024-07-12 16:02:34.847451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:34.862863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:34.862892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:34.862924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:34.876424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:34.876452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:34.876482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:34.887599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:34.887630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:34.887661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:34.899635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:34.899662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:34.899692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:34.912123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:34.912151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:34.912181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:34.922386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:34.922426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:34.922442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:34.934748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:34.934777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 
lba:11448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:34.934814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:34.946699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:34.946748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:34.946766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:34.960017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:34.960045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:34.960061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:34.969779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:34.969806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:34.969837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:34.984172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:34.984201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:34.984232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:34.998594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:34.998622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:34.998654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:35.008130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:35.008158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:35.008188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:35.021645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:35.021674] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:35.021704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:35.033234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:35.033261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:2060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:35.033293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:35.043969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:35.043997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:35.044029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:35.058973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:35.059002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:35.059034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:35.072346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:35.072373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:35.072405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:35.083365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:35.083392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:35.083422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:35.094606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:37.817 [2024-07-12 16:02:35.094635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:35.094675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:37.817 [2024-07-12 16:02:35.106410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 
00:25:37.817 [2024-07-12 16:02:35.106441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.817 [2024-07-12 16:02:35.106458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.075 [2024-07-12 16:02:35.118089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:38.075 [2024-07-12 16:02:35.118118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.075 [2024-07-12 16:02:35.118150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.075 [2024-07-12 16:02:35.131326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:38.075 [2024-07-12 16:02:35.131355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.075 [2024-07-12 16:02:35.131385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.075 [2024-07-12 16:02:35.143071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:38.075 [2024-07-12 16:02:35.143114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.075 [2024-07-12 16:02:35.143130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.075 [2024-07-12 16:02:35.157276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:38.075 [2024-07-12 16:02:35.157304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.075 [2024-07-12 16:02:35.157335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.075 [2024-07-12 16:02:35.167806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:38.075 [2024-07-12 16:02:35.167834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.075 [2024-07-12 16:02:35.167865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.075 [2024-07-12 16:02:35.181368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:38.075 [2024-07-12 16:02:35.181397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.075 [2024-07-12 16:02:35.181427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.075 [2024-07-12 16:02:35.193608] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:38.075 [2024-07-12 16:02:35.193637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.075 [2024-07-12 16:02:35.193667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.075 [2024-07-12 16:02:35.204170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:38.075 [2024-07-12 16:02:35.204198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.075 [2024-07-12 16:02:35.204229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.075 [2024-07-12 16:02:35.215482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:38.075 [2024-07-12 16:02:35.215510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.075 [2024-07-12 16:02:35.215541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.075 [2024-07-12 16:02:35.226829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:38.075 [2024-07-12 16:02:35.226857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.075 [2024-07-12 16:02:35.226888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.075 [2024-07-12 16:02:35.238626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:38.075 [2024-07-12 16:02:35.238654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.075 [2024-07-12 16:02:35.238683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.075 [2024-07-12 16:02:35.250769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:38.075 [2024-07-12 16:02:35.250796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.075 [2024-07-12 16:02:35.250826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:38.075 [2024-07-12 16:02:35.263271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380) 00:25:38.075 [2024-07-12 16:02:35.263298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:38.075 [2024-07-12 16:02:35.263328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:25:38.075 [2024-07-12 16:02:35.274793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380)
00:25:38.075 [2024-07-12 16:02:35.274835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:38.075 [2024-07-12 16:02:35.274852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:38.075 [2024-07-12 16:02:35.285599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380)
00:25:38.075 [2024-07-12 16:02:35.285626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:38.075 [2024-07-12 16:02:35.285657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:38.075 [2024-07-12 16:02:35.298303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380)
00:25:38.075 [2024-07-12 16:02:35.298345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:38.075 [2024-07-12 16:02:35.298367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:38.075 [2024-07-12 16:02:35.308812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380)
00:25:38.075 [2024-07-12 16:02:35.308840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:38.075 [2024-07-12 16:02:35.308872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:38.075 [2024-07-12 16:02:35.320592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221b380)
00:25:38.075 [2024-07-12 16:02:35.320620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:38.075 [2024-07-12 16:02:35.320649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:38.075
00:25:38.075 Latency(us)
00:25:38.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:38.075 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:38.075 nvme0n1 : 2.05 19445.17 75.96 0.00 0.00 6443.42 3094.76 46603.38
00:25:38.075 ===================================================================================================================
00:25:38.075 Total : 19445.17 75.96 0.00 0.00 6443.42 3094.76 46603.38
00:25:38.075 0
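The run summarized above drove randread I/O through bdevperf for about two seconds while the harness injected CRC-32C data-digest corruption through accel_error_inject_error (the same RPC is visible later in this trace for the next run), so the affected READs completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The digest.sh steps that follow count those completions over the bperf RPC socket. A minimal sketch of that query, reusing only the socket path, bdev name, RPC name, and JSON fields that appear in this trace; the short rpc.py path and the single-line jq filter are readability assumptions, not part of the captured run:

    # read the per-bdev NVMe error counters (kept when bdev_nvme_set_options enables
    # --nvme-error-stat, as the harness does below) and pull out the transient count
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

A non-zero result is what the (( 156 > 0 )) check in the trace that follows asserts.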
00:25:38.333 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:38.333 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:38.333 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:38.333 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:38.333 | .driver_specific
00:25:38.333 | .nvme_error
00:25:38.333 | .status_code
00:25:38.333 | .command_transient_transport_error'
00:25:38.590 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 156 > 0 ))
00:25:38.590 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 855844
00:25:38.590 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 855844 ']'
00:25:38.590 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 855844
00:25:38.590 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:25:38.590 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:38.590 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 855844
00:25:38.590 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:38.590 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:25:38.590 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 855844'
00:25:38.590 killing process with pid 855844
00:25:38.590 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 855844
00:25:38.590 Received shutdown signal, test time was about 2.000000 seconds
00:25:38.590
00:25:38.590 Latency(us)
00:25:38.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:38.590 ===================================================================================================================
00:25:38.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:38.590 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 855844
00:25:38.848 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:25:38.848 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:38.848 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:25:38.848 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:38.848 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:25:38.848 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=856256
00:25:38.848 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 856256 /var/tmp/bperf.sock
00:25:38.848 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:25:38.848 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 856256 ']'
00:25:38.848 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:38.848 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:38.848 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX
domain socket /var/tmp/bperf.sock...' 00:25:38.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:38.848 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:38.848 16:02:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:38.848 [2024-07-12 16:02:35.972095] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:25:38.848 [2024-07-12 16:02:35.972172] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856256 ] 00:25:38.848 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:38.848 Zero copy mechanism will not be used. 00:25:38.848 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.848 [2024-07-12 16:02:36.032583] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.848 [2024-07-12 16:02:36.139565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.105 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:39.105 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:39.105 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:39.105 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:39.363 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:39.363 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.363 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:39.363 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.363 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:39.363 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:39.959 nvme0n1 00:25:39.959 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:39.959 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.959 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:39.959 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.959 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:39.959 16:02:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bperf.sock perform_tests 00:25:39.959 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:39.959 Zero copy mechanism will not be used. 00:25:39.959 Running I/O for 2 seconds... 00:25:39.959 [2024-07-12 16:02:37.090187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.959 [2024-07-12 16:02:37.090256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.959 [2024-07-12 16:02:37.090276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.959 [2024-07-12 16:02:37.095930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.095963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.095981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.102208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.102238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.102270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.108931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.108961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.108993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.115428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.115457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.115488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.122071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.122100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.122132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.129065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.129095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.129126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.135969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.136006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.136024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.142975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.143005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.143035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.150945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.150989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.151007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.158571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.158615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.158633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.165617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.165660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.165678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.172373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.172402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.172434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.179144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.179174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.179205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.185816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.185847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.185878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.191951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.191981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.192013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.198899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.198929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.198960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.206369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.206399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.206430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.212956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.212999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.213016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.219516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.219558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.219575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.226430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 
[2024-07-12 16:02:37.226470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.226501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.233095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.233124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.233156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.239981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.240015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.240047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:39.960 [2024-07-12 16:02:37.247141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:39.960 [2024-07-12 16:02:37.247169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:39.960 [2024-07-12 16:02:37.247203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.255128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.255160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.255195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.262371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.262400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.262432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.269338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.269366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.269397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.276153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.276181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.276212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.283122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.283174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.283191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.290301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.290328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.290359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.296992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.297035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.297052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.303814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.303841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.303873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.310872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.310900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.310932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.317815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.317843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.317874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.325199] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.325228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.325259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.332411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.332439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.332471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.339567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.339595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.339626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.346835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.346863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.346895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.353976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.354006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.354038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.361086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.361113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.361155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.367967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.367997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.368014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.220 
[2024-07-12 16:02:37.374522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.374551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.374589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.381273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.381301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.381332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.387852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.387881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.387914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.393662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.393690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.393723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.399287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.399315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.399346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.405015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.405058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.405075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.411530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.411559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.411590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.417929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.417958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.417989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.423235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.423263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.423293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.428853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.428886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.220 [2024-07-12 16:02:37.428918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.220 [2024-07-12 16:02:37.434624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.220 [2024-07-12 16:02:37.434665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.221 [2024-07-12 16:02:37.434682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.221 [2024-07-12 16:02:37.440774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.221 [2024-07-12 16:02:37.440815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.221 [2024-07-12 16:02:37.440846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.221 [2024-07-12 16:02:37.446343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.221 [2024-07-12 16:02:37.446372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.221 [2024-07-12 16:02:37.446403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.221 [2024-07-12 16:02:37.452039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.221 [2024-07-12 16:02:37.452067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.221 [2024-07-12 16:02:37.452097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.221 [2024-07-12 16:02:37.457734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.221 [2024-07-12 16:02:37.457769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.221 [2024-07-12 16:02:37.457800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.221 [2024-07-12 16:02:37.463659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.221 [2024-07-12 16:02:37.463687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.221 [2024-07-12 16:02:37.463717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.221 [2024-07-12 16:02:37.469955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.221 [2024-07-12 16:02:37.470000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.221 [2024-07-12 16:02:37.470018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.221 [2024-07-12 16:02:37.476409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.221 [2024-07-12 16:02:37.476437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.221 [2024-07-12 16:02:37.476469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.221 [2024-07-12 16:02:37.483894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.221 [2024-07-12 16:02:37.483923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.221 [2024-07-12 16:02:37.483955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.221 [2024-07-12 16:02:37.491764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.221 [2024-07-12 16:02:37.491794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.221 [2024-07-12 16:02:37.491825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.221 [2024-07-12 16:02:37.498476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.221 [2024-07-12 16:02:37.498519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.221 [2024-07-12 16:02:37.498535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.221 [2024-07-12 16:02:37.504158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.221 [2024-07-12 16:02:37.504185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.221 [2024-07-12 16:02:37.504216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.221 [2024-07-12 16:02:37.509843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.221 [2024-07-12 16:02:37.509874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.221 [2024-07-12 16:02:37.509891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.480 [2024-07-12 16:02:37.515611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.480 [2024-07-12 16:02:37.515653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.480 [2024-07-12 16:02:37.515669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.480 [2024-07-12 16:02:37.521406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.480 [2024-07-12 16:02:37.521434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.480 [2024-07-12 16:02:37.521465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.480 [2024-07-12 16:02:37.527365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.480 [2024-07-12 16:02:37.527392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.480 [2024-07-12 16:02:37.527423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.480 [2024-07-12 16:02:37.532833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.480 [2024-07-12 16:02:37.532874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.480 [2024-07-12 16:02:37.532898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.480 [2024-07-12 16:02:37.538912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.480 [2024-07-12 16:02:37.538941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.480 
[2024-07-12 16:02:37.538972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.480 [2024-07-12 16:02:37.544730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.480 [2024-07-12 16:02:37.544780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.480 [2024-07-12 16:02:37.544796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.480 [2024-07-12 16:02:37.551237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.480 [2024-07-12 16:02:37.551265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.480 [2024-07-12 16:02:37.551295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.558969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.558997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.559028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.566833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.566861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.566893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.574006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.574064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.574080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.580377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.580404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.580435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.586425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.586452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21568 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.586483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.592619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.592651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.592683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.598923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.598951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.598981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.605532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.605558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.605589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.609865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.609893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.609926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.614906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.614950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.614967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.621492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.621519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.621549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.627910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.627936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.627966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.634604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.634631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.634663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.641590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.641617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.641648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.647893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.647923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.647957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.653575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.653603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.653634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.659920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.659949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.659982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.666286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.666312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.666343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.672601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.672628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.672660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.678969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.678997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.679028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.686092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.686120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.686151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.693311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.693339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.693369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.700594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.700627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.700664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.707438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.707480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.707497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.714799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.714828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.714860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.722263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 
[2024-07-12 16:02:37.722296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.722327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.729539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.729567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.729598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.733675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.733702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.481 [2024-07-12 16:02:37.733732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.481 [2024-07-12 16:02:37.739971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.481 [2024-07-12 16:02:37.740015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.482 [2024-07-12 16:02:37.740033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.482 [2024-07-12 16:02:37.747238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.482 [2024-07-12 16:02:37.747267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.482 [2024-07-12 16:02:37.747304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.482 [2024-07-12 16:02:37.754127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.482 [2024-07-12 16:02:37.754156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.482 [2024-07-12 16:02:37.754187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.482 [2024-07-12 16:02:37.760925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.482 [2024-07-12 16:02:37.760959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.482 [2024-07-12 16:02:37.760991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.482 [2024-07-12 16:02:37.767305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6cfd10) 00:25:40.482 [2024-07-12 16:02:37.767333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.482 [2024-07-12 16:02:37.767363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.740 [2024-07-12 16:02:37.773633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.740 [2024-07-12 16:02:37.773663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.740 [2024-07-12 16:02:37.773696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.740 [2024-07-12 16:02:37.780824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.740 [2024-07-12 16:02:37.780864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.740 [2024-07-12 16:02:37.780896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.740 [2024-07-12 16:02:37.788647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.740 [2024-07-12 16:02:37.788674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.740 [2024-07-12 16:02:37.788705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.740 [2024-07-12 16:02:37.795973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.740 [2024-07-12 16:02:37.796002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.740 [2024-07-12 16:02:37.796034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.740 [2024-07-12 16:02:37.802915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.740 [2024-07-12 16:02:37.802944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.740 [2024-07-12 16:02:37.802975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.740 [2024-07-12 16:02:37.810353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.740 [2024-07-12 16:02:37.810380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.740 [2024-07-12 16:02:37.810410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.740 [2024-07-12 16:02:37.814450] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.740 [2024-07-12 16:02:37.814476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.740 [2024-07-12 16:02:37.814506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.740 [2024-07-12 16:02:37.821679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.740 [2024-07-12 16:02:37.821707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.740 [2024-07-12 16:02:37.821744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.740 [2024-07-12 16:02:37.830235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.740 [2024-07-12 16:02:37.830276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.740 [2024-07-12 16:02:37.830293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.740 [2024-07-12 16:02:37.838257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.740 [2024-07-12 16:02:37.838285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.740 [2024-07-12 16:02:37.838315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.740 [2024-07-12 16:02:37.845749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.740 [2024-07-12 16:02:37.845777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.740 [2024-07-12 16:02:37.845808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.740 [2024-07-12 16:02:37.854514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.740 [2024-07-12 16:02:37.854541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.740 [2024-07-12 16:02:37.854573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.740 [2024-07-12 16:02:37.863885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.740 [2024-07-12 16:02:37.863915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.740 [2024-07-12 16:02:37.863947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
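Aside: the repeated "data digest error on tqpair=(0x6cfd10)" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" pairs in this stretch of the log are the expected result of the test step traced above. host/digest.sh corrupts the accel crc32c operation while bdevperf runs a 2-second randread workload of 131072-byte I/O at queue depth 16 against the controller attached with --ddgst, so every read fails its TCP data digest check and completes with a transient transport error, which bdev_nvme keeps retrying because of --bdev-retry-count -1. A minimal sketch of replaying that RPC sequence by hand follows; the rpc.py and bdevperf.py paths and all flags are copied from the trace, while the shell wrapper, the SPDK variable, and the socket used for the injection call are assumptions (the rpc_cmd expansion is not visible in this excerpt).
# Sketch only - hand-replay of the RPCs traced above, under the assumptions noted above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
# Keep per-bdev NVMe error counters and retry failed I/O indefinitely.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Attach the target with TCP data digest enabled; this is what creates nvme0n1 in the trace.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt the accel crc32c result so received data digests no longer verify
# (in the trace this is issued via rpc_cmd, i.e. not against /var/tmp/bperf.sock).
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
# Run the timed workload; each completion then surfaces as
# COMMAND TRANSIENT TRANSPORT ERROR (00/22), as in the log entries above and below.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
Afterwards the script reads the per-bdev nvme_error counters and asserts that command_transient_transport_error is non-zero, the same jq filter and (( count > 0 )) check seen passing for the previous subtest at the very top of this excerpt.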
00:25:40.740 [2024-07-12 16:02:37.872147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.740 [2024-07-12 16:02:37.872175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.740 [2024-07-12 16:02:37.872206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.740 [2024-07-12 16:02:37.880306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.880334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.880366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.889094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.889122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.889158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.897486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.897513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.897543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.905862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.905891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.905923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.913291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.913317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.913347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.920539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.920564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.920594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.927863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.927890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.927920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.935158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.935184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.935214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.943426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.943452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.943482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.951965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.952008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.952026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.960046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.960073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.960104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.967837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.967865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.967896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.975657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.975684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.975724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.982690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.982716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.982756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.990576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.990603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.990634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:37.998269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:37.998296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:37.998327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:38.006009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:38.006052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:38.006067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:38.013429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:38.013481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:38.013497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:38.020472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:38.020498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:38.020534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:40.741 [2024-07-12 16:02:38.027694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:40.741 [2024-07-12 16:02:38.027735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.741 [2024-07-12 16:02:38.027760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.000 [2024-07-12 16:02:38.035249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.000 [2024-07-12 16:02:38.035275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.000 [2024-07-12 16:02:38.035306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.000 [2024-07-12 16:02:38.042478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.000 [2024-07-12 16:02:38.042504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.000 [2024-07-12 16:02:38.042535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.000 [2024-07-12 16:02:38.050105] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.000 [2024-07-12 16:02:38.050146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.000 [2024-07-12 16:02:38.050162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.000 [2024-07-12 16:02:38.057529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.000 [2024-07-12 16:02:38.057555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.000 [2024-07-12 16:02:38.057585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.000 [2024-07-12 16:02:38.064625] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.000 [2024-07-12 16:02:38.064652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.000 [2024-07-12 16:02:38.064682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.000 [2024-07-12 16:02:38.071687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.000 [2024-07-12 16:02:38.071727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.000 [2024-07-12 16:02:38.071751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.078620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.078646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 
[2024-07-12 16:02:38.078676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.085597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.085628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.085659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.092040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.092081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.092096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.099166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.099193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.099223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.106113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.106140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.106169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.113422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.113448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.113478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.120529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.120555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.120585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.128231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.128259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.128289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.135566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.135594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.135625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.142788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.142815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.142845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.149804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.149847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.149872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.156665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.156691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.156722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.163268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.163294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.163323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.169931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.169958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.169988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.176616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.176642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.176671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.183395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.183421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.183451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.190398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.190425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.190454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.197588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.197617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.197646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.204407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.204433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.204468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.210988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.211015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.211030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.217619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.217644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.217675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.224167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.224194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.224224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.231013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.231056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.231071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.238375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.238418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.238435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.245516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.245544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.245575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.253331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.253359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.253390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.261251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.261294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.261310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.268225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 [2024-07-12 16:02:38.268258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.268288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.001 [2024-07-12 16:02:38.275314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.001 
[2024-07-12 16:02:38.275342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.001 [2024-07-12 16:02:38.275373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.002 [2024-07-12 16:02:38.282986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.002 [2024-07-12 16:02:38.283028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.002 [2024-07-12 16:02:38.283044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.002 [2024-07-12 16:02:38.291589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.002 [2024-07-12 16:02:38.291618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.002 [2024-07-12 16:02:38.291650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.300217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.300259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.300275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.308070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.308098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.308128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.315216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.315244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.315274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.322629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.322671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.322696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.329266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.329294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.329324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.335495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.335522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.335553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.341783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.341811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.341841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.347646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.347673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.347704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.354024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.354066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.354082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.357901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.357929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.357960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.363521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.363548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.363579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.370171] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.370199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.370229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.376556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.376583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.376613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.383673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.383701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.383744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.390288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.390315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.390345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.396518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.396545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.396575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.402893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.402921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.402951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.409345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.409372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.409403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:25:41.261 [2024-07-12 16:02:38.416252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.416280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.416310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.423382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.423410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.423439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.429948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.429976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.430012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.436205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.436231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.436261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.442121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.261 [2024-07-12 16:02:38.442153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.261 [2024-07-12 16:02:38.442184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.261 [2024-07-12 16:02:38.448442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.448468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.448499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.454912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.454954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.454971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.461476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.461502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.461532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.467632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.467659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.467688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.473242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.473268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.473298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.479509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.479549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.479565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.486029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.486069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.486084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.492624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.492650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.492679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.499307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.499333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.499362] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.505983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.506010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.506044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.512788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.512815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.512846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.519497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.519523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.519553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.525501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.525528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.525558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.531074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.531100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.531129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.537235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.537263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.537294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.542937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.542978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.542996] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.262 [2024-07-12 16:02:38.548610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.262 [2024-07-12 16:02:38.548637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.262 [2024-07-12 16:02:38.548671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.554599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.554627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.554644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.560435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.560465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.560496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.566249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.566275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.566305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.571870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.571898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.571929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.577400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.577426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.577455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.583343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.583369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:41.521 [2024-07-12 16:02:38.583400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.590010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.590038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.590053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.597279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.597306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.597336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.604603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.604649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.604667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.612539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.612567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.612598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.620535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.620563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.620593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.628000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.628043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.628059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.635544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.635572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.635602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.643253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.643281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.643312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.650830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.650859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.650891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.658321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.658349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.658380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.666499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.666529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.666567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.674515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.674542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.674573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.682474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.682501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.682532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.690565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.690592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.690623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.699068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.521 [2024-07-12 16:02:38.699111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.521 [2024-07-12 16:02:38.699126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.521 [2024-07-12 16:02:38.707341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 [2024-07-12 16:02:38.707369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.707400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.522 [2024-07-12 16:02:38.716509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 [2024-07-12 16:02:38.716536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.716567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.522 [2024-07-12 16:02:38.723180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 [2024-07-12 16:02:38.723209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.723240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.522 [2024-07-12 16:02:38.730852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 [2024-07-12 16:02:38.730894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.730913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.522 [2024-07-12 16:02:38.738603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 [2024-07-12 16:02:38.738655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.738672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.522 [2024-07-12 16:02:38.743304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 [2024-07-12 16:02:38.743331] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.743360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.522 [2024-07-12 16:02:38.750165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 [2024-07-12 16:02:38.750192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.750223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.522 [2024-07-12 16:02:38.758347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 [2024-07-12 16:02:38.758385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.758416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.522 [2024-07-12 16:02:38.766656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 [2024-07-12 16:02:38.766684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.766715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.522 [2024-07-12 16:02:38.774414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 [2024-07-12 16:02:38.774442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.774474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.522 [2024-07-12 16:02:38.781139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 [2024-07-12 16:02:38.781170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.781203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.522 [2024-07-12 16:02:38.787832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 [2024-07-12 16:02:38.787876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.787894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.522 [2024-07-12 16:02:38.794547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 
[2024-07-12 16:02:38.794575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.794606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.522 [2024-07-12 16:02:38.801065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 [2024-07-12 16:02:38.801115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.801131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.522 [2024-07-12 16:02:38.807557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.522 [2024-07-12 16:02:38.807584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.522 [2024-07-12 16:02:38.807614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.814590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.814639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.814656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.821389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.821417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.821448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.829144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.829172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.829203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.836033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.836074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.836090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.843153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.843181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.843212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.850025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.850069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.850086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.857110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.857138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.857177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.864234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.864261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.864292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.871687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.871731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.871757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.879275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.879303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.879334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.886637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.886665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.886695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.893820] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.893849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.893881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.901032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.901060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.901076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.908350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.908377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.908408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.916360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.916388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.916419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.923310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.923346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.923379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.930533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.930560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.930592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.937542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.937569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.937600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
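The stream of COMMAND TRANSIENT TRANSPORT ERROR completions above is what the digest-error test tallies to decide whether the injected digest corruption was actually detected. A minimal sketch of how that counter can be read back, reusing the RPC socket, bdev name, and jq filter that appear later in this same run (/var/tmp/bperf.sock, nvme0n1):

    # Read per-bdev NVMe error statistics from the running bdevperf app and pull out
    # the transient transport error counter; bdev_nvme_set_options --nvme-error-stat
    # is applied earlier in the flow so that this field is populated.
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'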
00:25:41.781 [2024-07-12 16:02:38.941452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.941478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.941509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.948023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.948067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.948082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.955179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.955207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.955241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.962298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.962326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.962356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.969557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.969585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.969615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.781 [2024-07-12 16:02:38.977258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.781 [2024-07-12 16:02:38.977286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.781 [2024-07-12 16:02:38.977316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.782 [2024-07-12 16:02:38.984439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.782 [2024-07-12 16:02:38.984466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.782 [2024-07-12 16:02:38.984498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.782 [2024-07-12 16:02:38.991617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.782 [2024-07-12 16:02:38.991643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.782 [2024-07-12 16:02:38.991675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.782 [2024-07-12 16:02:38.998426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.782 [2024-07-12 16:02:38.998454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.782 [2024-07-12 16:02:38.998484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.782 [2024-07-12 16:02:39.005703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.782 [2024-07-12 16:02:39.005754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.782 [2024-07-12 16:02:39.005772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.782 [2024-07-12 16:02:39.013164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.782 [2024-07-12 16:02:39.013192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.782 [2024-07-12 16:02:39.013223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.782 [2024-07-12 16:02:39.020475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.782 [2024-07-12 16:02:39.020502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.782 [2024-07-12 16:02:39.020533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.782 [2024-07-12 16:02:39.028218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.782 [2024-07-12 16:02:39.028246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.782 [2024-07-12 16:02:39.028277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.782 [2024-07-12 16:02:39.036883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.782 [2024-07-12 16:02:39.036912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.782 [2024-07-12 16:02:39.036944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:41.782 [2024-07-12 16:02:39.045691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.782 [2024-07-12 16:02:39.045735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.782 [2024-07-12 16:02:39.045766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:41.782 [2024-07-12 16:02:39.054196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.782 [2024-07-12 16:02:39.054225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.782 [2024-07-12 16:02:39.054256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:41.782 [2024-07-12 16:02:39.062506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.782 [2024-07-12 16:02:39.062535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.782 [2024-07-12 16:02:39.062568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:41.782 [2024-07-12 16:02:39.071675] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:41.782 [2024-07-12 16:02:39.071706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:41.782 [2024-07-12 16:02:39.071748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:42.040 [2024-07-12 16:02:39.080436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:42.040 [2024-07-12 16:02:39.080465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-12 16:02:39.080495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:42.040 [2024-07-12 16:02:39.085031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6cfd10) 00:25:42.040 [2024-07-12 16:02:39.085073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.040 [2024-07-12 16:02:39.085089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:42.040 00:25:42.040 Latency(us) 00:25:42.040 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.040 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:42.040 nvme0n1 : 2.04 4386.18 548.27 0.00 0.00 3575.52 788.86 44467.39 00:25:42.040 
=================================================================================================================== 00:25:42.040 Total : 4386.18 548.27 0.00 0.00 3575.52 788.86 44467.39 00:25:42.040 0 00:25:42.040 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:42.040 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:42.040 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:42.040 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:42.040 | .driver_specific 00:25:42.040 | .nvme_error 00:25:42.040 | .status_code 00:25:42.040 | .command_transient_transport_error' 00:25:42.298 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 289 > 0 )) 00:25:42.298 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 856256 00:25:42.298 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 856256 ']' 00:25:42.298 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 856256 00:25:42.298 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:42.298 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:42.298 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 856256 00:25:42.298 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:42.298 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:42.298 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 856256' 00:25:42.298 killing process with pid 856256 00:25:42.298 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 856256 00:25:42.298 Received shutdown signal, test time was about 2.000000 seconds 00:25:42.298 00:25:42.298 Latency(us) 00:25:42.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.298 =================================================================================================================== 00:25:42.298 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:42.298 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 856256 00:25:42.555 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:42.555 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:42.556 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:42.556 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:42.556 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:42.556 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=856677 00:25:42.556 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 
-z 00:25:42.556 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 856677 /var/tmp/bperf.sock 00:25:42.556 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 856677 ']' 00:25:42.556 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:42.556 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:42.556 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:42.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:42.556 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:42.556 16:02:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:42.556 [2024-07-12 16:02:39.724516] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:25:42.556 [2024-07-12 16:02:39.724609] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid856677 ] 00:25:42.556 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.556 [2024-07-12 16:02:39.786053] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.813 [2024-07-12 16:02:39.894054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.813 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.813 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:42.813 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:42.813 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:43.070 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:43.070 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.070 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:43.070 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.070 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.070 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:43.636 nvme0n1 00:25:43.636 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:43.636 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:43.636 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:43.636 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.636 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:43.636 16:02:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:43.636 Running I/O for 2 seconds... 00:25:43.636 [2024-07-12 16:02:40.841390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e3498 00:25:43.636 [2024-07-12 16:02:40.842828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.636 [2024-07-12 16:02:40.842869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:43.636 [2024-07-12 16:02:40.853283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fb048 00:25:43.636 [2024-07-12 16:02:40.854672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.636 [2024-07-12 16:02:40.854699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:43.636 [2024-07-12 16:02:40.863965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f1430 00:25:43.636 [2024-07-12 16:02:40.865207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.636 [2024-07-12 16:02:40.865233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:43.636 [2024-07-12 16:02:40.875056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fc128 00:25:43.636 [2024-07-12 16:02:40.876363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.636 [2024-07-12 16:02:40.876399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:43.636 [2024-07-12 16:02:40.886223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f46d0 00:25:43.636 [2024-07-12 16:02:40.887009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.636 [2024-07-12 16:02:40.887045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:43.636 [2024-07-12 16:02:40.896708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e5ec8 00:25:43.636 [2024-07-12 16:02:40.897820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.636 [2024-07-12 16:02:40.897846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:43.636 [2024-07-12 16:02:40.907600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ee190 00:25:43.636 [2024-07-12 16:02:40.908712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.636 [2024-07-12 16:02:40.908768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:43.636 [2024-07-12 16:02:40.920771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fc998 00:25:43.636 [2024-07-12 16:02:40.922523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.636 [2024-07-12 16:02:40.922553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:43.636 [2024-07-12 16:02:40.928954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e99d8 00:25:43.894 [2024-07-12 16:02:40.929835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.894 [2024-07-12 16:02:40.929861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:43.894 [2024-07-12 16:02:40.942856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f2d80 00:25:43.894 [2024-07-12 16:02:40.944470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.894 [2024-07-12 16:02:40.944501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:43.894 [2024-07-12 16:02:40.952881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e2c28 00:25:43.894 [2024-07-12 16:02:40.954008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.894 [2024-07-12 16:02:40.954043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:43.894 [2024-07-12 16:02:40.962667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190feb58 00:25:43.895 [2024-07-12 16:02:40.964237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:40.964262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:40.974049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ed4e8 00:25:43.895 [2024-07-12 16:02:40.975670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 
16:02:40.975695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:40.985027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ef6a8 00:25:43.895 [2024-07-12 16:02:40.986308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:40.986332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:40.995758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e4de8 00:25:43.895 [2024-07-12 16:02:40.996878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:40.996903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.007065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e6fa8 00:25:43.895 [2024-07-12 16:02:41.008409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.008441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.017181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ec840 00:25:43.895 [2024-07-12 16:02:41.018518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.018542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.028180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e4578 00:25:43.895 [2024-07-12 16:02:41.029065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.029090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.040512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e6738 00:25:43.895 [2024-07-12 16:02:41.042237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.042266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.048173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e4de8 00:25:43.895 [2024-07-12 16:02:41.048901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
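The WRITE-side data digest errors in this stretch come from the corrupt-CRC32C injection armed just before perform_tests. Condensed from the commands traced above, and keeping the socket path, target address, and -i 256 interval used in this run, the setup looks roughly like:

    # bperf app: enable per-controller NVMe error counters and retry failed I/O indefinitely
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach the TCP target with data digest enabled so corrupted CRC32C payloads are caught
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target app (its default RPC socket): corrupt the crc32c result once every 256 operations
    scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    # start the queued workload in bdevperf (launched above with -w randwrite -o 4096 -q 128 -t 2)
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests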
00:25:43.895 [2024-07-12 16:02:41.048928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.060533] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fe2e8 00:25:43.895 [2024-07-12 16:02:41.061423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.061448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.070599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e5658 00:25:43.895 [2024-07-12 16:02:41.072118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.072142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.081554] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f20d8 00:25:43.895 [2024-07-12 16:02:41.082748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.082773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.094312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ef6a8 00:25:43.895 [2024-07-12 16:02:41.096302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.096340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.102599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ef6a8 00:25:43.895 [2024-07-12 16:02:41.103548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.103577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.114187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e5a90 00:25:43.895 [2024-07-12 16:02:41.115188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.115213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.125060] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f5378 00:25:43.895 [2024-07-12 16:02:41.125623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23652 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.125648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.138620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190dfdc0 00:25:43.895 [2024-07-12 16:02:41.140438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.140463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.146339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f8e88 00:25:43.895 [2024-07-12 16:02:41.147186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.147220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.156605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fbcf0 00:25:43.895 [2024-07-12 16:02:41.157391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.157420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.168578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fc998 00:25:43.895 [2024-07-12 16:02:41.169471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.169500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:43.895 [2024-07-12 16:02:41.179382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190eea00 00:25:43.895 [2024-07-12 16:02:41.180337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:43.895 [2024-07-12 16:02:41.180361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.190192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e6300 00:25:44.154 [2024-07-12 16:02:41.191219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.191248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.202475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e7c50 00:25:44.154 [2024-07-12 16:02:41.203511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15548 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.203535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.213826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e73e0 00:25:44.154 [2024-07-12 16:02:41.215242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.215269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.225317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e12d8 00:25:44.154 [2024-07-12 16:02:41.226850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.226880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.236959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f1868 00:25:44.154 [2024-07-12 16:02:41.238548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.238572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.248274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ea248 00:25:44.154 [2024-07-12 16:02:41.250054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.250085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.256004] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fa3a0 00:25:44.154 [2024-07-12 16:02:41.256790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.256819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.267209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ed920 00:25:44.154 [2024-07-12 16:02:41.267858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.267884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.278468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fef90 00:25:44.154 [2024-07-12 16:02:41.279283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:69 nsid:1 lba:1037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.279307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.289464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f9b30 00:25:44.154 [2024-07-12 16:02:41.290479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.290503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.300567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f20d8 00:25:44.154 [2024-07-12 16:02:41.301775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.301803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.310756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f4b08 00:25:44.154 [2024-07-12 16:02:41.311964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.311988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.324097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e1710 00:25:44.154 [2024-07-12 16:02:41.325890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.325915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.331943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f31b8 00:25:44.154 [2024-07-12 16:02:41.332846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.332870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.344963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e73e0 00:25:44.154 [2024-07-12 16:02:41.346427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.346459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.354835] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f6458 00:25:44.154 [2024-07-12 16:02:41.355523] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.355547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.367527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f81e0 00:25:44.154 [2024-07-12 16:02:41.369000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.369028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.378888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e3498 00:25:44.154 [2024-07-12 16:02:41.380481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.380509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.389885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f8618 00:25:44.154 [2024-07-12 16:02:41.391548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.391573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.397307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fb048 00:25:44.154 [2024-07-12 16:02:41.398136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.398160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.410390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fa3a0 00:25:44.154 [2024-07-12 16:02:41.411643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.411668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.421240] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f9f68 00:25:44.154 [2024-07-12 16:02:41.422600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.422630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.431319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fc128 00:25:44.154 [2024-07-12 
16:02:41.432224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.432249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:44.154 [2024-07-12 16:02:41.441196] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fc998 00:25:44.154 [2024-07-12 16:02:41.442095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.154 [2024-07-12 16:02:41.442119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.455551] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f0bc0 00:25:44.413 [2024-07-12 16:02:41.457056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.457085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.465572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fdeb0 00:25:44.413 [2024-07-12 16:02:41.466682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.466707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.476463] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190de038 00:25:44.413 [2024-07-12 16:02:41.477417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.477442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.486611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e3498 00:25:44.413 [2024-07-12 16:02:41.488192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.488216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.497559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e49b0 00:25:44.413 [2024-07-12 16:02:41.498768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.498793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.508312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f4b08 
00:25:44.413 [2024-07-12 16:02:41.509373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.509401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.519557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f31b8 00:25:44.413 [2024-07-12 16:02:41.520771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.520796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.530346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f35f0 00:25:44.413 [2024-07-12 16:02:41.531747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.531771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.541636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f7538 00:25:44.413 [2024-07-12 16:02:41.543166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.543197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.552595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e88f8 00:25:44.413 [2024-07-12 16:02:41.554123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.554147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.560574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e4de8 00:25:44.413 [2024-07-12 16:02:41.561377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.561401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.571865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f92c0 00:25:44.413 [2024-07-12 16:02:41.572828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.572855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.583180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1a28b60) with pdu=0x2000190f2948 00:25:44.413 [2024-07-12 16:02:41.584302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.584326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.594487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ff3c8 00:25:44.413 [2024-07-12 16:02:41.595782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.595807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.605929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f0ff8 00:25:44.413 [2024-07-12 16:02:41.606820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.606846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.619122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f2d80 00:25:44.413 [2024-07-12 16:02:41.620963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.620991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.626772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f0bc0 00:25:44.413 [2024-07-12 16:02:41.627654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.627685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.637706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e5ec8 00:25:44.413 [2024-07-12 16:02:41.638522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.638547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.647811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fb480 00:25:44.413 [2024-07-12 16:02:41.648620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.648643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.660098] tcp.c:2067:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e5658 00:25:44.413 [2024-07-12 16:02:41.661073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.661097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.671302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fc128 00:25:44.413 [2024-07-12 16:02:41.672420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.672443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.682336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fbcf0 00:25:44.413 [2024-07-12 16:02:41.683454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.683478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.693083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e6300 00:25:44.413 [2024-07-12 16:02:41.694260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.694285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:44.413 [2024-07-12 16:02:41.703555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fc560 00:25:44.413 [2024-07-12 16:02:41.704831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.413 [2024-07-12 16:02:41.704873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:44.671 [2024-07-12 16:02:41.715640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ee190 00:25:44.671 [2024-07-12 16:02:41.716803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-07-12 16:02:41.716830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:44.671 [2024-07-12 16:02:41.726550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f7538 00:25:44.671 [2024-07-12 16:02:41.727227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.671 [2024-07-12 16:02:41.727253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:44.671 [2024-07-12 16:02:41.737827] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e1b48 00:25:44.671 [2024-07-12 16:02:41.738617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.738647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.749065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f81e0 00:25:44.672 [2024-07-12 16:02:41.750030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.750056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.759201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e5ec8 00:25:44.672 [2024-07-12 16:02:41.760752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.760777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.768537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e1710 00:25:44.672 [2024-07-12 16:02:41.769368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.769391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.779970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e7818 00:25:44.672 [2024-07-12 16:02:41.780910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.780935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.793007] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e1b48 00:25:44.672 [2024-07-12 16:02:41.794407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.794431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.801807] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f9f68 00:25:44.672 [2024-07-12 16:02:41.802603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.802627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:44.672 
[2024-07-12 16:02:41.812816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e8d30 00:25:44.672 [2024-07-12 16:02:41.813749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.813774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.823075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fe2e8 00:25:44.672 [2024-07-12 16:02:41.824004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.824051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.836441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e49b0 00:25:44.672 [2024-07-12 16:02:41.838010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:14675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.838036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.846321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e1f80 00:25:44.672 [2024-07-12 16:02:41.848063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.848091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.856238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e95a0 00:25:44.672 [2024-07-12 16:02:41.857092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.857130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.868532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ea680 00:25:44.672 [2024-07-12 16:02:41.869549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.869573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.879484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f7da8 00:25:44.672 [2024-07-12 16:02:41.880495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.880520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 
dnr:0 00:25:44.672 [2024-07-12 16:02:41.890758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fc128 00:25:44.672 [2024-07-12 16:02:41.891905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.891930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.900873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fcdd0 00:25:44.672 [2024-07-12 16:02:41.901875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.901899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.911269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fb8b8 00:25:44.672 [2024-07-12 16:02:41.912221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.912246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.922549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fac10 00:25:44.672 [2024-07-12 16:02:41.923696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.923736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.933607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f6458 00:25:44.672 [2024-07-12 16:02:41.934321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.934345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.946049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ec840 00:25:44.672 [2024-07-12 16:02:41.947743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.947770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:44.672 [2024-07-12 16:02:41.957566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e4578 00:25:44.672 [2024-07-12 16:02:41.959247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.672 [2024-07-12 16:02:41.959272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 
sqhd:006e p:0 m:0 dnr:0 00:25:44.931 [2024-07-12 16:02:41.969889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f7538 00:25:44.931 [2024-07-12 16:02:41.971748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.931 [2024-07-12 16:02:41.971775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:44.931 [2024-07-12 16:02:41.977627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f57b0 00:25:44.931 [2024-07-12 16:02:41.978500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.931 [2024-07-12 16:02:41.978524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:44.931 [2024-07-12 16:02:41.989870] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e6738 00:25:44.931 [2024-07-12 16:02:41.991288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:41.991312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.001162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e4140 00:25:44.932 [2024-07-12 16:02:42.002681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.002705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.012434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fda78 00:25:44.932 [2024-07-12 16:02:42.014130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.014166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.023376] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f6020 00:25:44.932 [2024-07-12 16:02:42.025049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.025079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.030868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190df550 00:25:44.932 [2024-07-12 16:02:42.031670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.031695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.042156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f8a50 00:25:44.932 [2024-07-12 16:02:42.043081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.043106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.053181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e4de8 00:25:44.932 [2024-07-12 16:02:42.054142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.054167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.064329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f2948 00:25:44.932 [2024-07-12 16:02:42.065308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:12389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.065333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.074588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e6fa8 00:25:44.932 [2024-07-12 16:02:42.075577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.075611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.086700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e23b8 00:25:44.932 [2024-07-12 16:02:42.087842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.087868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.097953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f2948 00:25:44.932 [2024-07-12 16:02:42.099348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.099375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.108655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f35f0 00:25:44.932 [2024-07-12 16:02:42.109785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.109811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.119999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ed4e8 00:25:44.932 [2024-07-12 16:02:42.121116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.121150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.131218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f7970 00:25:44.932 [2024-07-12 16:02:42.132468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.132491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.142190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e9168 00:25:44.932 [2024-07-12 16:02:42.143466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.143490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.153076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e88f8 00:25:44.932 [2024-07-12 16:02:42.154343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.154367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.164332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ea680 00:25:44.932 [2024-07-12 16:02:42.165700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.165745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.174540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fbcf0 00:25:44.932 [2024-07-12 16:02:42.175953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.175977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.184669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fb8b8 00:25:44.932 [2024-07-12 16:02:42.185641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.185665] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.194587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f6458 00:25:44.932 [2024-07-12 16:02:42.195549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.195573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.206627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e7818 00:25:44.932 [2024-07-12 16:02:42.207771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.207796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:44.932 [2024-07-12 16:02:42.217853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fb8b8 00:25:44.932 [2024-07-12 16:02:42.219100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:44.932 [2024-07-12 16:02:42.219124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:45.191 [2024-07-12 16:02:42.229138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f96f8 00:25:45.191 [2024-07-12 16:02:42.230269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.191 [2024-07-12 16:02:42.230294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:45.191 [2024-07-12 16:02:42.240349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190eb328 00:25:45.191 [2024-07-12 16:02:42.241469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.191 [2024-07-12 16:02:42.241494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:45.191 [2024-07-12 16:02:42.251588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ebfd0 00:25:45.191 [2024-07-12 16:02:42.252815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.191 [2024-07-12 16:02:42.252840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:45.191 [2024-07-12 16:02:42.260978] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fe720 00:25:45.191 [2024-07-12 16:02:42.261685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.191 [2024-07-12 16:02:42.261709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:45.191 [2024-07-12 16:02:42.272087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ec408 00:25:45.191 [2024-07-12 16:02:42.272988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.191 [2024-07-12 16:02:42.273013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:45.191 [2024-07-12 16:02:42.282226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f46d0 00:25:45.191 [2024-07-12 16:02:42.282984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.191 [2024-07-12 16:02:42.283009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:45.191 [2024-07-12 16:02:42.292627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f6458 00:25:45.191 [2024-07-12 16:02:42.293386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.191 [2024-07-12 16:02:42.293410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:45.191 [2024-07-12 16:02:42.304707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e01f8 00:25:45.191 [2024-07-12 16:02:42.305644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.191 [2024-07-12 16:02:42.305683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:45.191 [2024-07-12 16:02:42.315908] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190de8a8 00:25:45.191 [2024-07-12 16:02:42.316952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.191 [2024-07-12 16:02:42.316977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:45.191 [2024-07-12 16:02:42.326934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e6738 00:25:45.192 [2024-07-12 16:02:42.327993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.192 [2024-07-12 16:02:42.328018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:45.192 [2024-07-12 16:02:42.338176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e3d08 00:25:45.192 [2024-07-12 16:02:42.339355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.192 [2024-07-12 
16:02:42.339379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:45.192 [2024-07-12 16:02:42.349136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e6fa8 00:25:45.192 [2024-07-12 16:02:42.350446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.192 [2024-07-12 16:02:42.350473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:45.192 [2024-07-12 16:02:42.360880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f96f8 00:25:45.192 [2024-07-12 16:02:42.362264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.192 [2024-07-12 16:02:42.362289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:45.192 [2024-07-12 16:02:42.371122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f8e88 00:25:45.192 [2024-07-12 16:02:42.372298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.192 [2024-07-12 16:02:42.372323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:45.192 [2024-07-12 16:02:42.382344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e1b48 00:25:45.192 [2024-07-12 16:02:42.383534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:21933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.192 [2024-07-12 16:02:42.383558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:45.192 [2024-07-12 16:02:42.393545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e9e10 00:25:45.192 [2024-07-12 16:02:42.394869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:20225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.192 [2024-07-12 16:02:42.394894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:45.192 [2024-07-12 16:02:42.403631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e6b70 00:25:45.192 [2024-07-12 16:02:42.404804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.192 [2024-07-12 16:02:42.404830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:45.192 [2024-07-12 16:02:42.414885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fd640 00:25:45.192 [2024-07-12 16:02:42.416076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:45.192 [2024-07-12 16:02:42.416103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:45.192 [2024-07-12 16:02:42.425758] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e7c50 00:25:45.192 [2024-07-12 16:02:42.426943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.192 [2024-07-12 16:02:42.426968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:45.192 [2024-07-12 16:02:42.436953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e12d8 00:25:45.192 [2024-07-12 16:02:42.438385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.192 [2024-07-12 16:02:42.438409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:45.192 [2024-07-12 16:02:42.448049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f20d8 00:25:45.192 [2024-07-12 16:02:42.449380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.192 [2024-07-12 16:02:42.449404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.192 [2024-07-12 16:02:42.458221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190df988 00:25:45.192 [2024-07-12 16:02:42.459517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.192 [2024-07-12 16:02:42.459541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:45.192 [2024-07-12 16:02:42.469340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f1430 00:25:45.192 [2024-07-12 16:02:42.470640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.192 [2024-07-12 16:02:42.470664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:45.192 [2024-07-12 16:02:42.480640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e6738 00:25:45.192 [2024-07-12 16:02:42.482005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.192 [2024-07-12 16:02:42.482047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.491676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190df118 00:25:45.451 [2024-07-12 16:02:42.493005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:4081 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.493050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.501808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e9168 00:25:45.451 [2024-07-12 16:02:42.502661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.502686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.512548] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e3060 00:25:45.451 [2024-07-12 16:02:42.513433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.513457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.523664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e88f8 00:25:45.451 [2024-07-12 16:02:42.524445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.524469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.534805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e6b70 00:25:45.451 [2024-07-12 16:02:42.535824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.535849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.545641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f81e0 00:25:45.451 [2024-07-12 16:02:42.546678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.546703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.556472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e1f80 00:25:45.451 [2024-07-12 16:02:42.557493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.557517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.567404] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ebb98 00:25:45.451 [2024-07-12 16:02:42.568438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:15968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.568462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.579677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ec408 00:25:45.451 [2024-07-12 16:02:42.581281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.581305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.590863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e73e0 00:25:45.451 [2024-07-12 16:02:42.592433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.592462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.599918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f1868 00:25:45.451 [2024-07-12 16:02:42.600763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.600789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.611503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f6020 00:25:45.451 [2024-07-12 16:02:42.612564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.612588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.622956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f57b0 00:25:45.451 [2024-07-12 16:02:42.624127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.624152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.633168] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f8e88 00:25:45.451 [2024-07-12 16:02:42.634179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.634204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.643555] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fac10 00:25:45.451 [2024-07-12 16:02:42.644556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:75 nsid:1 lba:9059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.644592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.654889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fb048 00:25:45.451 [2024-07-12 16:02:42.656034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.656074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.666794] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f92c0 00:25:45.451 [2024-07-12 16:02:42.668169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.668194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.678684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e5ec8 00:25:45.451 [2024-07-12 16:02:42.680204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.680229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.689279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fac10 00:25:45.451 [2024-07-12 16:02:42.690326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.690356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:45.451 [2024-07-12 16:02:42.699680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e3d08 00:25:45.451 [2024-07-12 16:02:42.700715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.451 [2024-07-12 16:02:42.700766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:45.452 [2024-07-12 16:02:42.712269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e23b8 00:25:45.452 [2024-07-12 16:02:42.713426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.452 [2024-07-12 16:02:42.713453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:45.452 [2024-07-12 16:02:42.722701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f31b8 00:25:45.452 [2024-07-12 16:02:42.723855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.452 [2024-07-12 16:02:42.723882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:45.452 [2024-07-12 16:02:42.734146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190f2948 00:25:45.452 [2024-07-12 16:02:42.735307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.452 [2024-07-12 16:02:42.735333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:45.711 [2024-07-12 16:02:42.746244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fef90 00:25:45.711 [2024-07-12 16:02:42.747565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.711 [2024-07-12 16:02:42.747592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:45.711 [2024-07-12 16:02:42.756937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ff3c8 00:25:45.711 [2024-07-12 16:02:42.757980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:21855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.711 [2024-07-12 16:02:42.758007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:45.711 [2024-07-12 16:02:42.769950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190ff3c8 00:25:45.711 [2024-07-12 16:02:42.771555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.711 [2024-07-12 16:02:42.771581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:45.711 [2024-07-12 16:02:42.780501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fdeb0 00:25:45.711 [2024-07-12 16:02:42.781712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.711 [2024-07-12 16:02:42.781759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:45.711 [2024-07-12 16:02:42.791903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190fe720 00:25:45.711 [2024-07-12 16:02:42.792947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.711 [2024-07-12 16:02:42.792973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:45.711 [2024-07-12 16:02:42.804735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190eaab8 00:25:45.711 [2024-07-12 
16:02:42.806629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.711 [2024-07-12 16:02:42.806654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:45.711 [2024-07-12 16:02:42.812684] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e9168 00:25:45.711 [2024-07-12 16:02:42.813580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.711 [2024-07-12 16:02:42.813605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:45.711 [2024-07-12 16:02:42.825527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1a28b60) with pdu=0x2000190e8d30 00:25:45.711 [2024-07-12 16:02:42.826789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:45.711 [2024-07-12 16:02:42.826815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:45.711 00:25:45.711 Latency(us) 00:25:45.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.711 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:45.711 nvme0n1 : 2.00 23282.06 90.95 0.00 0.00 5491.97 2208.81 15146.10 00:25:45.711 =================================================================================================================== 00:25:45.711 Total : 23282.06 90.95 0.00 0.00 5491.97 2208.81 15146.10 00:25:45.711 0 00:25:45.711 16:02:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:45.711 16:02:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:45.711 16:02:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:45.711 16:02:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:45.711 | .driver_specific 00:25:45.711 | .nvme_error 00:25:45.711 | .status_code 00:25:45.711 | .command_transient_transport_error' 00:25:45.968 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 182 > 0 )) 00:25:45.968 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 856677 00:25:45.968 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 856677 ']' 00:25:45.968 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 856677 00:25:45.968 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:45.968 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:45.968 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 856677 00:25:45.968 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:45.968 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:45.968 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 856677' 00:25:45.968 killing process with pid 856677 00:25:45.968 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 856677 00:25:45.968 Received shutdown signal, test time was about 2.000000 seconds 00:25:45.968 00:25:45.968 Latency(us) 00:25:45.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:45.968 =================================================================================================================== 00:25:45.968 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:45.968 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 856677 00:25:46.225 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:46.225 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:46.225 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:46.225 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:46.225 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:46.225 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=857192 00:25:46.225 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:46.225 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 857192 /var/tmp/bperf.sock 00:25:46.225 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 857192 ']' 00:25:46.225 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:46.225 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:46.225 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:46.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:46.225 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:46.225 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:46.225 [2024-07-12 16:02:43.445036] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:25:46.225 [2024-07-12 16:02:43.445115] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid857192 ] 00:25:46.225 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:46.225 Zero copy mechanism will not be used. 
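A minimal bash sketch of the transient-error check traced just above in host/digest.sh: it reads per-bdev I/O statistics out of the running bdevperf app over /var/tmp/bperf.sock and pulls the command_transient_transport_error counter from the NVMe driver-specific error stats with jq. The rpc.py path, socket, RPC name and jq filter are taken verbatim from the trace; only the function-body layout and the final usage line are assumed here.

get_transient_errcount() {
    local bdev=$1
    # Ask the running bdevperf app for per-bdev I/O stats over its RPC socket...
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        # ...and extract the transient transport error counter from the
        # NVMe error statistics kept by the bdev_nvme driver.
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

# The qd=128 run above reported 182 such errors, so the "(( 182 > 0 ))" check
# passed and bperf pid 856677 was killed before the next run was started.
(( $(get_transient_errcount nvme0n1) > 0 ))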
00:25:46.225 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.225 [2024-07-12 16:02:43.503145] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.482 [2024-07-12 16:02:43.611011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.482 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:46.482 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:25:46.482 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:46.482 16:02:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:46.739 16:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:46.739 16:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.739 16:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:46.739 16:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.739 16:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:46.739 16:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:47.304 nvme0n1 00:25:47.304 16:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:47.304 16:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.304 16:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:47.304 16:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.304 16:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:47.304 16:02:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:47.562 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:47.562 Zero copy mechanism will not be used. 00:25:47.562 Running I/O for 2 seconds... 
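A bash sketch of the setup sequence traced above for this qd=16, 128 KiB randwrite run, using the RPC calls as they appear in the log. The bperf_rpc calls go through /var/tmp/bperf.sock to the bdevperf app; rpc_cmd is assumed here to talk to the nvmf target's default RPC socket (the trace does not show its address), so that part of the sketch is an assumption.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# bdevperf side: keep per-controller NVMe error statistics and retry failed
# I/O indefinitely, so digest failures are counted as transient transport
# errors instead of surfacing as hard I/O errors.
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# target side (assumed default RPC socket): crc32c error injection stays
# disabled while the controller attaches.
$rpc accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digest enabled (--ddgst), matching the
# trace: 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1, bdev name nvme0.
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# target side: switch crc32c injection to corrupt mode ("-t corrupt -i 32" is
# copied verbatim from the trace), so data digests start failing in flight.
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32

# Finally kick off the 2-second randwrite workload inside bdevperf, which
# produces the stream of digest-error / transient-transport-error records
# that follows.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests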
00:25:47.562 [2024-07-12 16:02:44.630260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.562 [2024-07-12 16:02:44.630606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.562 [2024-07-12 16:02:44.630641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.562 [2024-07-12 16:02:44.636259] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.562 [2024-07-12 16:02:44.636534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.562 [2024-07-12 16:02:44.636561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.562 [2024-07-12 16:02:44.642226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.562 [2024-07-12 16:02:44.642520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.562 [2024-07-12 16:02:44.642547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.562 [2024-07-12 16:02:44.648121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.562 [2024-07-12 16:02:44.648432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.562 [2024-07-12 16:02:44.648459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.562 [2024-07-12 16:02:44.653994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.654369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.654396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.659727] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.660038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.660085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.665353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.665662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.665688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.671574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.671859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.671887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.677660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.677959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.677987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.683405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.683672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.683698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.689133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.689401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.689426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.694816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.695112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.695137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.701822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.702119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.702144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.707437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.707693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.707733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.712932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.713251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.713277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.718705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.718999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.719026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.724299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.724560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.724586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.729822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.730120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.730146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.735687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.735996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.736039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.742437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.742751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.742782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.748326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.748612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.748638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.754218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.754481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.754507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.760077] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.760347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.760373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.766002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.766305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.766331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.771805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.772116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.772142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.778700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.779037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.779065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.784620] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.784905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.784932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.790280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.790550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 
[2024-07-12 16:02:44.790576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.795927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.796206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.796231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.801777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.802071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.802097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.808143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.808402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.808428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.814572] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.814864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.814896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.820896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.563 [2024-07-12 16:02:44.821186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.563 [2024-07-12 16:02:44.821212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.563 [2024-07-12 16:02:44.828585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.564 [2024-07-12 16:02:44.828894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.564 [2024-07-12 16:02:44.828928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.564 [2024-07-12 16:02:44.836110] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.564 [2024-07-12 16:02:44.836280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.564 [2024-07-12 16:02:44.836306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.564 [2024-07-12 16:02:44.842430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.564 [2024-07-12 16:02:44.842688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.564 [2024-07-12 16:02:44.842714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.564 [2024-07-12 16:02:44.848130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.564 [2024-07-12 16:02:44.848390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.564 [2024-07-12 16:02:44.848416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.564 [2024-07-12 16:02:44.854158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.564 [2024-07-12 16:02:44.854461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.564 [2024-07-12 16:02:44.854512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.861413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.861676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.861702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.868491] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.868815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.868842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.876142] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.876409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.876436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.882329] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.882722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.882759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.888584] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.888893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.888921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.894381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.894681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.894707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.900312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.900572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.900597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.905823] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.906106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.906132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.911590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.911881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.911908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.917104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.917365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.917402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.923362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.923620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.923650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.929901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.930186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.930212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.936270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.936533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.936558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.941792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.942079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.942105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.947242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.947509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.947534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.952801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.953154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.953180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.958412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.958679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.958705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.963958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 
[2024-07-12 16:02:44.964243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.964268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.823 [2024-07-12 16:02:44.969393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.823 [2024-07-12 16:02:44.969655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.823 [2024-07-12 16:02:44.969692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:44.975356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:44.975630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:44.975656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:44.981683] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:44.981988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:44.982014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:44.987368] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:44.987637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:44.987663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:44.992910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:44.993191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:44.993217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:44.998529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:44.998816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:44.998842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.003997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.004273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.004298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.009525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.009817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.009853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.015147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.015405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.015430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.020769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.021038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.021078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.027409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.027666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.027692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.034070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.034357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.034383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.040822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.041103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.041129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.047223] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.047485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.047511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.052964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.053287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.053313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.058509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.058791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.058818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.064136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.064399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.064425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.069774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.070060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.070086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.075319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.075579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.075609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.081374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.081614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.081640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:47.824 [2024-07-12 16:02:45.087579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.087874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.087900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.093260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.093534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.093559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.098839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.099190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.099216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.104330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.104643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.104669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:47.824 [2024-07-12 16:02:45.110002] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:47.824 [2024-07-12 16:02:45.110271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.824 [2024-07-12 16:02:45.110297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.083 [2024-07-12 16:02:45.116307] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.083 [2024-07-12 16:02:45.116626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.083 [2024-07-12 16:02:45.116653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.083 [2024-07-12 16:02:45.122995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.083 [2024-07-12 16:02:45.123271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.083 [2024-07-12 16:02:45.123297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.083 [2024-07-12 16:02:45.128636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.083 [2024-07-12 16:02:45.128930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.083 [2024-07-12 16:02:45.128957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.083 [2024-07-12 16:02:45.134301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.083 [2024-07-12 16:02:45.134615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.083 [2024-07-12 16:02:45.134642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.083 [2024-07-12 16:02:45.140422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.083 [2024-07-12 16:02:45.140699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.140745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.146818] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.147116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.147141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.153642] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.153935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.153962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.159930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.160204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.160230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.165573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.165876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.165919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.171219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.171547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.171573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.176933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.177240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.177266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.182501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.182779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.182805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.188000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.188274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.188299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.193484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.193760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.193786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.199078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.199352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.199377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.204586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.204881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.204907] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.210233] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.210506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.210533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.216102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.216364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.216389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.221770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.222103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.222139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.227615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.227913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.227946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.234237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.234532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.234558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.240745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.241072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.241112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.246596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.246892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 
[2024-07-12 16:02:45.246919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.253072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.253365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.253391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.259390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.259649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.259675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.265508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.265807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.265834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.271185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.084 [2024-07-12 16:02:45.271481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.084 [2024-07-12 16:02:45.271506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.084 [2024-07-12 16:02:45.276640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.276962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.276989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.282164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.282525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.282551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.287880] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.288161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.288186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.293497] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.293816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.293842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.300119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.300415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.300441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.306643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.306933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.306960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.313603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.313893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.313920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.319924] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.320199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.320226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.326417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.326678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.326704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.333623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.333916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.333945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.340154] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.340421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.340448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.347353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.347641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.347668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.354262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.354520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.354545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.361193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.361548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.361576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.367769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.368067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.368108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.085 [2024-07-12 16:02:45.374197] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.085 [2024-07-12 16:02:45.374546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.085 [2024-07-12 16:02:45.374576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.344 [2024-07-12 16:02:45.381336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.344 [2024-07-12 16:02:45.381658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.344 [2024-07-12 16:02:45.381685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.344 [2024-07-12 16:02:45.387428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.344 [2024-07-12 16:02:45.387721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.344 [2024-07-12 16:02:45.387773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.344 [2024-07-12 16:02:45.393701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.344 [2024-07-12 16:02:45.394040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.344 [2024-07-12 16:02:45.394074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.344 [2024-07-12 16:02:45.399631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.344 [2024-07-12 16:02:45.399940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.344 [2024-07-12 16:02:45.399970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.344 [2024-07-12 16:02:45.405587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.344 [2024-07-12 16:02:45.405914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.344 [2024-07-12 16:02:45.405943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.411488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.411967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.411997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.417542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.417830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.417858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.423303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 
[2024-07-12 16:02:45.423558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.423583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.428992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.429292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.429318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.434816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.435128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.435155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.440659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.440959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.440989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.446623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.446930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.446960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.452478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.452770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.452800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.459237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.459496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.459522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.465703] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.466062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.466088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.471509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.471787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.471814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.477284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.477553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.477579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.483107] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.483387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.483414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.488994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.489282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.489308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.496039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.496313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.496347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.502744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.503015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.503043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.510517] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.510811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.510839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.517857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.518133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.518160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.524301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.524558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.524584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.530358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.530617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.530643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.536503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.536783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.536810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.542773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.543040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.543081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.549041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.549316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.549344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:48.345 [2024-07-12 16:02:45.555285] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.555551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.555577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.561527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.561810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.345 [2024-07-12 16:02:45.561838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.345 [2024-07-12 16:02:45.567832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.345 [2024-07-12 16:02:45.568162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.346 [2024-07-12 16:02:45.568188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.346 [2024-07-12 16:02:45.574223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.346 [2024-07-12 16:02:45.574489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.346 [2024-07-12 16:02:45.574516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.346 [2024-07-12 16:02:45.580451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.346 [2024-07-12 16:02:45.580708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.346 [2024-07-12 16:02:45.580735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.346 [2024-07-12 16:02:45.586927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.346 [2024-07-12 16:02:45.587209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.346 [2024-07-12 16:02:45.587235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.346 [2024-07-12 16:02:45.593227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.346 [2024-07-12 16:02:45.593494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.346 [2024-07-12 16:02:45.593521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.346 [2024-07-12 16:02:45.599467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.346 [2024-07-12 16:02:45.599727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.346 [2024-07-12 16:02:45.599777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.346 [2024-07-12 16:02:45.605633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.346 [2024-07-12 16:02:45.605921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.346 [2024-07-12 16:02:45.605948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.346 [2024-07-12 16:02:45.612097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.346 [2024-07-12 16:02:45.612359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.346 [2024-07-12 16:02:45.612385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.346 [2024-07-12 16:02:45.619587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.346 [2024-07-12 16:02:45.619878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.346 [2024-07-12 16:02:45.619906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.346 [2024-07-12 16:02:45.628116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.346 [2024-07-12 16:02:45.628380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.346 [2024-07-12 16:02:45.628407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.346 [2024-07-12 16:02:45.636618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.346 [2024-07-12 16:02:45.636965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.346 [2024-07-12 16:02:45.636994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.645127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.645392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.645419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.652359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.652691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.652732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.658979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.659308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.659336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.665603] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.665909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.665937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.672237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.672524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.672556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.678907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.679237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.679264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.685377] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.685643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.685670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.692078] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.692343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.692370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.698574] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.698866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.698895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.705203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.705482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.705508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.711465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.711753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.711780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.718926] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.719210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.719237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.726993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.727261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.727288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.736081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.736352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.736379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.743486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.743773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 
[2024-07-12 16:02:45.743801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.751202] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.751464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.751491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.757914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.758218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.758245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.605 [2024-07-12 16:02:45.764785] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.605 [2024-07-12 16:02:45.765083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.605 [2024-07-12 16:02:45.765123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.771321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.771595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.771622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.777559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.777846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.777872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.784049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.784345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.784372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.790367] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.790632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.790658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.796676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.796971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.796999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.803076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.803342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.803370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.809508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.809797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.809824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.816121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.816414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.816439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.822505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.822797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.822824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.828765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.829079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.829105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.835130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.835396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.835422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.841358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.841625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.841650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.847599] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.847892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.847924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.853898] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.854185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.854211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.860191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.860458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.860485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.866694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.867071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.867097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.874313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.874577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.874603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.881212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.881310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.881334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.888679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.888984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.889011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.606 [2024-07-12 16:02:45.896842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.606 [2024-07-12 16:02:45.897153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.606 [2024-07-12 16:02:45.897181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.904339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 [2024-07-12 16:02:45.904606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.904633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.911433] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 [2024-07-12 16:02:45.911750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.911779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.918945] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 [2024-07-12 16:02:45.919236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.919263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.925508] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 [2024-07-12 16:02:45.925799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.925827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.931760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 
[2024-07-12 16:02:45.932038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.932080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.938040] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 [2024-07-12 16:02:45.938382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.938409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.944431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 [2024-07-12 16:02:45.944697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.944745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.950489] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 [2024-07-12 16:02:45.950781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.950808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.956668] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 [2024-07-12 16:02:45.957009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.957036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.962951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 [2024-07-12 16:02:45.963289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.963320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.969185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 [2024-07-12 16:02:45.969452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.969479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.975371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 [2024-07-12 16:02:45.975636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.975662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.981635] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 [2024-07-12 16:02:45.981927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.981955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.987938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 [2024-07-12 16:02:45.988235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.988262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:45.994221] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.866 [2024-07-12 16:02:45.994488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.866 [2024-07-12 16:02:45.994515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.866 [2024-07-12 16:02:46.000468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.000742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.000783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.006670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.006969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.006998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.012927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.013214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.013240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.019359] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.019642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.019669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.027074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.027341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.027368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.034556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.034874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.034903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.041451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.041717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.041767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.048486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.048772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.048800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.055767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.056057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.056084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.062109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.062377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.062404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:48.867 [2024-07-12 16:02:46.068552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.068845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.068872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.075003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.075297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.075323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.081432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.081695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.081743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.087686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.087988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.088015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.094066] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.094346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.094373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.100889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.101176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.101203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.108215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.108484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.108511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.115021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.115316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.115343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.121935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.122221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.122247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.129074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.129396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.129423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.135623] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.135919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.135951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.142566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.142875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.867 [2024-07-12 16:02:46.142918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:48.867 [2024-07-12 16:02:46.150126] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.867 [2024-07-12 16:02:46.150393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.868 [2024-07-12 16:02:46.150420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:48.868 [2024-07-12 16:02:46.157673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:48.868 [2024-07-12 16:02:46.157982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:48.868 [2024-07-12 16:02:46.158012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.127 [2024-07-12 16:02:46.165651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.127 [2024-07-12 16:02:46.165954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.127 [2024-07-12 16:02:46.165983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.127 [2024-07-12 16:02:46.173872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.127 [2024-07-12 16:02:46.174214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.127 [2024-07-12 16:02:46.174242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.127 [2024-07-12 16:02:46.181366] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.127 [2024-07-12 16:02:46.181476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.127 [2024-07-12 16:02:46.181500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.127 [2024-07-12 16:02:46.190137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.127 [2024-07-12 16:02:46.190405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.127 [2024-07-12 16:02:46.190432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.127 [2024-07-12 16:02:46.198084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.127 [2024-07-12 16:02:46.198354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.127 [2024-07-12 16:02:46.198381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.127 [2024-07-12 16:02:46.205843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.127 [2024-07-12 16:02:46.206175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.127 [2024-07-12 16:02:46.206203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.127 [2024-07-12 16:02:46.214134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.127 [2024-07-12 16:02:46.214409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.127 [2024-07-12 16:02:46.214435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.127 [2024-07-12 16:02:46.222361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.127 [2024-07-12 16:02:46.222627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.127 [2024-07-12 16:02:46.222654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.127 [2024-07-12 16:02:46.230272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.127 [2024-07-12 16:02:46.230561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.127 [2024-07-12 16:02:46.230588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.127 [2024-07-12 16:02:46.237334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.127 [2024-07-12 16:02:46.237600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.127 [2024-07-12 16:02:46.237627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.127 [2024-07-12 16:02:46.243929] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.127 [2024-07-12 16:02:46.244216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.127 [2024-07-12 16:02:46.244243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.127 [2024-07-12 16:02:46.250403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.127 [2024-07-12 16:02:46.250675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.127 [2024-07-12 16:02:46.250702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.127 [2024-07-12 16:02:46.257038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.127 [2024-07-12 16:02:46.257333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.257360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.263657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.263949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 
[2024-07-12 16:02:46.263977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.270290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.270558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.270584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.276893] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.277181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.277208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.283437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.283784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.283811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.290206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.290532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.290559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.296917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.297213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.297239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.302698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.302984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.303011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.308393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.308648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.308673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.314904] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.315182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.315208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.321347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.321632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.321663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.328015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.328320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.328345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.334127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.334417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.334443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.339690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.339975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.340001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.345336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.345629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.345656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.351130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.351387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.351413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.356687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.356973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.357000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.362292] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.362546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.362572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.368468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.368829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.368856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.375185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.375459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.375495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.380822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.381097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.381123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.386578] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.386866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.386893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.392270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.392524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.392549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.398173] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.398464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.398492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.404550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.404849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.404877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.410641] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.410931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.410960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.128 [2024-07-12 16:02:46.417117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.128 [2024-07-12 16:02:46.417373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.128 [2024-07-12 16:02:46.417399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.424051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.424315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.424342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.430682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.431017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.431057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.437089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 
[2024-07-12 16:02:46.437344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.437370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.443878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.444173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.444199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.451260] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.451516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.451542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.459353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.459620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.459646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.467179] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.467435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.467461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.474423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.474679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.474705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.481952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.482250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.482277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.490480] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.490798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.490826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.497950] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.498244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.498271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.504619] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.504902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.504929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.511203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.511458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.511485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.516927] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.517224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.517251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.522847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.523175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.523202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.528617] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.528912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.528941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.534478] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.534770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.534798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.540346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.540597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.540623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.545859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.546183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.546209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.551559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.551838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.551865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.557177] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.557431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.557457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.562854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.563146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.563171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.568886] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.569179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.569205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:49.389 [2024-07-12 16:02:46.575048] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.575331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.575356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.580889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.581153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.581191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.587387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.587655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.587680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.593609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.389 [2024-07-12 16:02:46.593898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.389 [2024-07-12 16:02:46.593928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:49.389 [2024-07-12 16:02:46.599825] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.390 [2024-07-12 16:02:46.600100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-07-12 16:02:46.600126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:49.390 [2024-07-12 16:02:46.605977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.390 [2024-07-12 16:02:46.606253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-07-12 16:02:46.606279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:49.390 [2024-07-12 16:02:46.612519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90 00:25:49.390 [2024-07-12 16:02:46.612798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.390 [2024-07-12 16:02:46.612833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:49.390 [2024-07-12 16:02:46.618120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90
00:25:49.390 [2024-07-12 16:02:46.618380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.390 [2024-07-12 16:02:46.618405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:49.390 [2024-07-12 16:02:46.623640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x185de60) with pdu=0x2000190fef90
00:25:49.390 [2024-07-12 16:02:46.623942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:49.390 [2024-07-12 16:02:46.623975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:49.390
00:25:49.390 Latency(us)
00:25:49.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:49.390 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:25:49.390 nvme0n1 : 2.00 4859.80 607.48 0.00 0.00 3285.56 2560.76 8980.86
00:25:49.390 ===================================================================================================================
00:25:49.390 Total : 4859.80 607.48 0.00 0.00 3285.56 2560.76 8980.86
00:25:49.390 0
00:25:49.390 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:49.390 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:49.390 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:49.390 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:49.390 | .driver_specific
00:25:49.390 | .nvme_error
00:25:49.390 | .status_code
00:25:49.390 | .command_transient_transport_error'
00:25:49.647 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 313 > 0 ))
00:25:49.647 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 857192
00:25:49.647 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 857192 ']'
00:25:49.647 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 857192
00:25:49.647 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:25:49.647 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:49.647 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 857192
00:25:49.647 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:49.647 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:25:49.647 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 857192'
killing process with pid 857192
16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@967 -- # kill 857192 00:25:49.648 Received shutdown signal, test time was about 2.000000 seconds 00:25:49.648 00:25:49.648 Latency(us) 00:25:49.648 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.648 =================================================================================================================== 00:25:49.648 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:49.648 16:02:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 857192 00:25:49.905 16:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 855731 00:25:49.905 16:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 855731 ']' 00:25:49.905 16:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 855731 00:25:49.905 16:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:25:49.905 16:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:49.905 16:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 855731 00:25:49.905 16:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:49.905 16:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:49.905 16:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 855731' 00:25:49.905 killing process with pid 855731 00:25:49.905 16:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 855731 00:25:49.905 16:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 855731 00:25:50.165 00:25:50.165 real 0m15.682s 00:25:50.165 user 0m30.458s 00:25:50.165 sys 0m5.111s 00:25:50.165 16:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:50.165 16:02:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:50.165 ************************************ 00:25:50.165 END TEST nvmf_digest_error 00:25:50.165 ************************************ 00:25:50.165 16:02:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:25:50.165 16:02:47 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:50.165 16:02:47 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:50.165 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:50.165 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:25:50.165 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.165 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:25:50.165 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.165 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.425 rmmod nvme_tcp 00:25:50.425 rmmod nvme_fabrics 00:25:50.425 rmmod nvme_keyring 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 855731 ']' 
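The pass condition for the digest-error test above comes straight from bdevperf's per-bdev NVMe error counters: host/digest.sh calls bdev_get_iostat over the bperf RPC socket and extracts command_transient_transport_error with jq, then requires a non-zero count (313 > 0 in this run). A minimal standalone sketch of the same query, reusing the socket path and jq fields shown in the trace (the counter name assumes this SPDK revision):

# Count transient transport errors recorded for the bdevperf-attached bdev nvme0n1.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# Injected data-digest corruption must show up as at least one transient transport error.
(( errs > 0 )) && echo "detected $errs transient transport errors on nvme0n1"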
00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 855731 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 855731 ']' 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 855731 00:25:50.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (855731) - No such process 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 855731 is not found' 00:25:50.425 Process with pid 855731 is not found 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:50.425 16:02:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.351 16:02:49 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:52.351 00:25:52.351 real 0m35.727s 00:25:52.351 user 1m1.417s 00:25:52.351 sys 0m11.843s 00:25:52.351 16:02:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:52.351 16:02:49 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:52.351 ************************************ 00:25:52.351 END TEST nvmf_digest 00:25:52.351 ************************************ 00:25:52.351 16:02:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:52.351 16:02:49 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:25:52.351 16:02:49 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:25:52.351 16:02:49 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:25:52.351 16:02:49 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:52.351 16:02:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:52.351 16:02:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:52.351 16:02:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:52.351 ************************************ 00:25:52.351 START TEST nvmf_bdevperf 00:25:52.351 ************************************ 00:25:52.351 16:02:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:52.609 * Looking for test storage... 
00:25:52.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.609 16:02:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:52.610 16:02:49 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:54.507 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:54.507 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:54.507 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:54.508 Found net devices under 0000:84:00.0: cvl_0_0 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:54.508 Found net devices under 0000:84:00.1: cvl_0_1 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:54.508 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:54.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:25:54.766 00:25:54.766 --- 10.0.0.2 ping statistics --- 00:25:54.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.766 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:25:54.766 00:25:54.766 --- 10.0.0.1 ping statistics --- 00:25:54.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.766 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=859563 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 859563 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 859563 ']' 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:54.766 16:02:51 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:54.766 [2024-07-12 16:02:51.970857] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:25:54.766 [2024-07-12 16:02:51.970957] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.766 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.766 [2024-07-12 16:02:52.034881] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:55.024 [2024-07-12 16:02:52.138411] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:55.024 [2024-07-12 16:02:52.138461] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.024 [2024-07-12 16:02:52.138484] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.024 [2024-07-12 16:02:52.138495] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.024 [2024-07-12 16:02:52.138504] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.024 [2024-07-12 16:02:52.138584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.024 [2024-07-12 16:02:52.138692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:55.024 [2024-07-12 16:02:52.138695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.024 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:55.024 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:25:55.024 16:02:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:55.024 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:55.024 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:55.024 16:02:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.024 16:02:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:55.024 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.024 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:55.024 [2024-07-12 16:02:52.277219] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.024 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.024 16:02:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:55.024 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.024 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:55.281 Malloc0 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:55.281 [2024-07-12 16:02:52.345547] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:55.281 { 00:25:55.281 "params": { 00:25:55.281 "name": "Nvme$subsystem", 00:25:55.281 "trtype": "$TEST_TRANSPORT", 00:25:55.281 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:55.281 "adrfam": "ipv4", 00:25:55.281 "trsvcid": "$NVMF_PORT", 00:25:55.281 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:55.281 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:55.281 "hdgst": ${hdgst:-false}, 00:25:55.281 "ddgst": ${ddgst:-false} 00:25:55.281 }, 00:25:55.281 "method": "bdev_nvme_attach_controller" 00:25:55.281 } 00:25:55.281 EOF 00:25:55.281 )") 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:55.281 16:02:52 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:55.281 "params": { 00:25:55.281 "name": "Nvme1", 00:25:55.281 "trtype": "tcp", 00:25:55.281 "traddr": "10.0.0.2", 00:25:55.281 "adrfam": "ipv4", 00:25:55.281 "trsvcid": "4420", 00:25:55.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:55.282 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:55.282 "hdgst": false, 00:25:55.282 "ddgst": false 00:25:55.282 }, 00:25:55.282 "method": "bdev_nvme_attach_controller" 00:25:55.282 }' 00:25:55.282 [2024-07-12 16:02:52.389531] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:25:55.282 [2024-07-12 16:02:52.389610] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859709 ] 00:25:55.282 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.282 [2024-07-12 16:02:52.450030] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.282 [2024-07-12 16:02:52.570279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.539 Running I/O for 1 seconds... 
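bdevperf takes its NVMe-oF connection entirely from the JSON passed on /dev/fd/62: gen_nvmf_target_json emits a single bdev_nvme_attach_controller entry pointing at the listener configured just above (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1). To repeat this 1-second verify run by hand, that entry can be written to a file inside SPDK's usual subsystems/bdev JSON-config layout; the wrapper below is a reconstruction (only the attach entry itself appears in the trace), while the parameters and bdevperf flags are copied verbatim:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
cat > /tmp/bperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same queue depth, I/O size, workload and runtime as the run whose results follow.
"$spdk"/build/examples/bdevperf --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 1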
00:25:56.910 00:25:56.910 Latency(us) 00:25:56.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.910 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:56.910 Verification LBA range: start 0x0 length 0x4000 00:25:56.910 Nvme1n1 : 1.00 8234.67 32.17 0.00 0.00 15473.80 1413.88 15243.19 00:25:56.910 =================================================================================================================== 00:25:56.910 Total : 8234.67 32.17 0.00 0.00 15473.80 1413.88 15243.19 00:25:56.910 16:02:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=859850 00:25:56.910 16:02:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:56.910 16:02:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:56.910 16:02:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:56.910 16:02:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:25:56.910 16:02:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:25:56.910 16:02:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:56.910 16:02:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:56.910 { 00:25:56.910 "params": { 00:25:56.910 "name": "Nvme$subsystem", 00:25:56.910 "trtype": "$TEST_TRANSPORT", 00:25:56.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:56.910 "adrfam": "ipv4", 00:25:56.910 "trsvcid": "$NVMF_PORT", 00:25:56.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:56.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:56.910 "hdgst": ${hdgst:-false}, 00:25:56.910 "ddgst": ${ddgst:-false} 00:25:56.910 }, 00:25:56.910 "method": "bdev_nvme_attach_controller" 00:25:56.910 } 00:25:56.910 EOF 00:25:56.910 )") 00:25:56.910 16:02:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:25:56.910 16:02:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:25:56.910 16:02:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:25:56.910 16:02:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:56.910 "params": { 00:25:56.910 "name": "Nvme1", 00:25:56.910 "trtype": "tcp", 00:25:56.910 "traddr": "10.0.0.2", 00:25:56.910 "adrfam": "ipv4", 00:25:56.910 "trsvcid": "4420", 00:25:56.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:56.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:56.910 "hdgst": false, 00:25:56.910 "ddgst": false 00:25:56.910 }, 00:25:56.910 "method": "bdev_nvme_attach_controller" 00:25:56.910 }' 00:25:56.910 [2024-07-12 16:02:54.082751] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:25:56.910 [2024-07-12 16:02:54.082841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid859850 ] 00:25:56.910 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.910 [2024-07-12 16:02:54.143876] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.168 [2024-07-12 16:02:54.252989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.426 Running I/O for 15 seconds... 
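The second bdevperf instance repeats the verify workload for 15 seconds (-t 15 -f), and host/bdevperf.sh then SIGKILLs the nvmf target underneath it (kill -9 859563, followed by sleep 3). With the target gone, every command still outstanding on the TCP qpair is completed by the host as ABORTED - SQ DELETION (00/08), which is the flood of nvme_qpair notices that follows. A quick way to condense such an abort storm from a saved copy of this console output (bdevperf.log is an illustrative filename):

# Tally aborted READs vs WRITEs from the nvme_qpair print_command notices.
grep -oE 'READ sqid:1|WRITE sqid:1' bdevperf.log | sort | uniq -c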
00:25:59.957 16:02:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 859563 00:25:59.957 16:02:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:59.957 [2024-07-12 16:02:57.052235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.957 [2024-07-12 16:02:57.052282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.957 [2024-07-12 16:02:57.052310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.957 [2024-07-12 16:02:57.052324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.957 [2024-07-12 16:02:57.052339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.957 [2024-07-12 16:02:57.052351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.957 [2024-07-12 16:02:57.052365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.957 [2024-07-12 16:02:57.052377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.957 [2024-07-12 16:02:57.052391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.957 [2024-07-12 16:02:57.052403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.957 [2024-07-12 16:02:57.052418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.957 [2024-07-12 16:02:57.052430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.957 [2024-07-12 16:02:57.052445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.957 [2024-07-12 16:02:57.052459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.957 [2024-07-12 16:02:57.052472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.957 [2024-07-12 16:02:57.052487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.052517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.052557] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.052590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.052619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.052649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.052676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.052701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.052755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.052789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.052820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.052851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.052881] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.052915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:51432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.052951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.052972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:51440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.052988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:51456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:51504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:51520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.053319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.053352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.053383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.053410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.053437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.958 [2024-07-12 16:02:57.053463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:59.958 [2024-07-12 16:02:57.053519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:51568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:51576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.958 [2024-07-12 16:02:57.053685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.958 [2024-07-12 16:02:57.053699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.053715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.053754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:51600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.053769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.053793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.053807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.053823] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:51616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.053838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.053853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.053867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.053882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.053897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.053912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:51640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.053926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.053942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.053956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.053972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.053986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:51664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:51688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:51704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:51712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:51720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:51728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:51736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:51744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:51760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:51768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:112 nsid:1 lba:51776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:51792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:51824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:51840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:51856 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:51864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.959 [2024-07-12 16:02:57.054950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.959 [2024-07-12 16:02:57.054964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.054978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:51920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:51928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:51936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:59.960 [2024-07-12 16:02:57.055071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:51976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:51984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:51992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:52000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:52008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:52016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055355] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:52032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:52040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:52056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:52064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:52072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:52080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.960 [2024-07-12 16:02:57.055612] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.960 [2024-07-12 16:02:57.055638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:52096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:52104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:52112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:52120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:52128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:52136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:52144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:52160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:52168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.055979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.960 [2024-07-12 16:02:57.055993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.960 [2024-07-12 16:02:57.056012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:52184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.961 [2024-07-12 16:02:57.056040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.961 [2024-07-12 16:02:57.056063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.961 [2024-07-12 16:02:57.056075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.961 [2024-07-12 16:02:57.056089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:52200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.961 [2024-07-12 16:02:57.056117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.961 [2024-07-12 16:02:57.056130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.961 [2024-07-12 16:02:57.056141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.961 [2024-07-12 16:02:57.056154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e6bd70 is same with the state(5) to be set 00:25:59.961 [2024-07-12 16:02:57.056168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.961 [2024-07-12 16:02:57.056184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.961 [2024-07-12 16:02:57.056195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:52216 len:8 PRP1 0x0 PRP2 0x0 00:25:59.961 [2024-07-12 16:02:57.056211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.961 [2024-07-12 16:02:57.056271] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e6bd70 was disconnected and freed. reset controller. 
00:25:59.961 [2024-07-12 16:02:57.060311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.961 [2024-07-12 16:02:57.060376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.961 [2024-07-12 16:02:57.060966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.961 [2024-07-12 16:02:57.061005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.961 [2024-07-12 16:02:57.061037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.961 [2024-07-12 16:02:57.061239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.961 [2024-07-12 16:02:57.061427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.961 [2024-07-12 16:02:57.061445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.961 [2024-07-12 16:02:57.061459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.961 [2024-07-12 16:02:57.064409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.961 [2024-07-12 16:02:57.073731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.961 [2024-07-12 16:02:57.074168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.961 [2024-07-12 16:02:57.074194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.961 [2024-07-12 16:02:57.074208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.961 [2024-07-12 16:02:57.074396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.961 [2024-07-12 16:02:57.074582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.961 [2024-07-12 16:02:57.074600] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.961 [2024-07-12 16:02:57.074613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.961 [2024-07-12 16:02:57.077519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.961 [2024-07-12 16:02:57.086848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.961 [2024-07-12 16:02:57.087275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.961 [2024-07-12 16:02:57.087301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.961 [2024-07-12 16:02:57.087315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.961 [2024-07-12 16:02:57.087499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.961 [2024-07-12 16:02:57.087686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.961 [2024-07-12 16:02:57.087706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.961 [2024-07-12 16:02:57.087718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.961 [2024-07-12 16:02:57.090627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.961 [2024-07-12 16:02:57.099885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.961 [2024-07-12 16:02:57.100309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.961 [2024-07-12 16:02:57.100334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.961 [2024-07-12 16:02:57.100348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.961 [2024-07-12 16:02:57.100532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.961 [2024-07-12 16:02:57.100719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.961 [2024-07-12 16:02:57.100762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.961 [2024-07-12 16:02:57.100787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.961 [2024-07-12 16:02:57.103655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.961 [2024-07-12 16:02:57.112937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.961 [2024-07-12 16:02:57.113349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.961 [2024-07-12 16:02:57.113380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.961 [2024-07-12 16:02:57.113394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.961 [2024-07-12 16:02:57.113577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.961 [2024-07-12 16:02:57.113791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.961 [2024-07-12 16:02:57.113811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.961 [2024-07-12 16:02:57.113829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.961 [2024-07-12 16:02:57.116680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.961 [2024-07-12 16:02:57.126095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.961 [2024-07-12 16:02:57.126510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.961 [2024-07-12 16:02:57.126535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.961 [2024-07-12 16:02:57.126549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.961 [2024-07-12 16:02:57.126733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.961 [2024-07-12 16:02:57.126957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.961 [2024-07-12 16:02:57.126977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.961 [2024-07-12 16:02:57.126990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.961 [2024-07-12 16:02:57.129849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.961 [2024-07-12 16:02:57.139242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.961 [2024-07-12 16:02:57.139672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.961 [2024-07-12 16:02:57.139720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.961 [2024-07-12 16:02:57.139734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.961 [2024-07-12 16:02:57.139960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.961 [2024-07-12 16:02:57.140164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.961 [2024-07-12 16:02:57.140185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.961 [2024-07-12 16:02:57.140198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.961 [2024-07-12 16:02:57.142990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.961 [2024-07-12 16:02:57.152331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.961 [2024-07-12 16:02:57.152762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.961 [2024-07-12 16:02:57.152803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.961 [2024-07-12 16:02:57.152817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.961 [2024-07-12 16:02:57.153001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.961 [2024-07-12 16:02:57.153188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.961 [2024-07-12 16:02:57.153208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.961 [2024-07-12 16:02:57.153221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.961 [2024-07-12 16:02:57.156043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.961 [2024-07-12 16:02:57.165341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.961 [2024-07-12 16:02:57.165786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.961 [2024-07-12 16:02:57.165815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.961 [2024-07-12 16:02:57.165830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.961 [2024-07-12 16:02:57.166014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.961 [2024-07-12 16:02:57.166200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.962 [2024-07-12 16:02:57.166219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.962 [2024-07-12 16:02:57.166231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.962 [2024-07-12 16:02:57.169020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.962 [2024-07-12 16:02:57.178539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.962 [2024-07-12 16:02:57.178939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.962 [2024-07-12 16:02:57.178965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.962 [2024-07-12 16:02:57.178980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.962 [2024-07-12 16:02:57.179181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.962 [2024-07-12 16:02:57.179368] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.962 [2024-07-12 16:02:57.179388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.962 [2024-07-12 16:02:57.179401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.962 [2024-07-12 16:02:57.182303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.962 [2024-07-12 16:02:57.191584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.962 [2024-07-12 16:02:57.191980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.962 [2024-07-12 16:02:57.192006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.962 [2024-07-12 16:02:57.192020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.962 [2024-07-12 16:02:57.192204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.962 [2024-07-12 16:02:57.192391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.962 [2024-07-12 16:02:57.192409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.962 [2024-07-12 16:02:57.192421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.962 [2024-07-12 16:02:57.195298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.962 [2024-07-12 16:02:57.204748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.962 [2024-07-12 16:02:57.205123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.962 [2024-07-12 16:02:57.205148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.962 [2024-07-12 16:02:57.205162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.962 [2024-07-12 16:02:57.205346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.962 [2024-07-12 16:02:57.205537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.962 [2024-07-12 16:02:57.205556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.962 [2024-07-12 16:02:57.205568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.962 [2024-07-12 16:02:57.208457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.962 [2024-07-12 16:02:57.217828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.962 [2024-07-12 16:02:57.218235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.962 [2024-07-12 16:02:57.218260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.962 [2024-07-12 16:02:57.218274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.962 [2024-07-12 16:02:57.218458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.962 [2024-07-12 16:02:57.218656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.962 [2024-07-12 16:02:57.218677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.962 [2024-07-12 16:02:57.218689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.962 [2024-07-12 16:02:57.221575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.962 [2024-07-12 16:02:57.230866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.962 [2024-07-12 16:02:57.231306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.962 [2024-07-12 16:02:57.231331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.962 [2024-07-12 16:02:57.231345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.962 [2024-07-12 16:02:57.231529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.962 [2024-07-12 16:02:57.231727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.962 [2024-07-12 16:02:57.231782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.962 [2024-07-12 16:02:57.231796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.962 [2024-07-12 16:02:57.234663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.962 [2024-07-12 16:02:57.243919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.962 [2024-07-12 16:02:57.244331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.962 [2024-07-12 16:02:57.244357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:25:59.962 [2024-07-12 16:02:57.244372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:25:59.962 [2024-07-12 16:02:57.244556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:25:59.962 [2024-07-12 16:02:57.244768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.962 [2024-07-12 16:02:57.244789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.962 [2024-07-12 16:02:57.244803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.962 [2024-07-12 16:02:57.248139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.229 [2024-07-12 16:02:57.257011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.229 [2024-07-12 16:02:57.257491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.229 [2024-07-12 16:02:57.257545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.229 [2024-07-12 16:02:57.257560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.229 [2024-07-12 16:02:57.257797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.229 [2024-07-12 16:02:57.258053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.229 [2024-07-12 16:02:57.258099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.229 [2024-07-12 16:02:57.258138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.229 [2024-07-12 16:02:57.261125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.229 [2024-07-12 16:02:57.270063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.229 [2024-07-12 16:02:57.270487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.229 [2024-07-12 16:02:57.270541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.229 [2024-07-12 16:02:57.270555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.229 [2024-07-12 16:02:57.270750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.229 [2024-07-12 16:02:57.270959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.229 [2024-07-12 16:02:57.270980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.229 [2024-07-12 16:02:57.270993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.229 [2024-07-12 16:02:57.273852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.229 [2024-07-12 16:02:57.283029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.229 [2024-07-12 16:02:57.283447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.229 [2024-07-12 16:02:57.283498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.229 [2024-07-12 16:02:57.283512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.229 [2024-07-12 16:02:57.283696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.229 [2024-07-12 16:02:57.283914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.229 [2024-07-12 16:02:57.283935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.229 [2024-07-12 16:02:57.283947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.229 [2024-07-12 16:02:57.286804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.229 [2024-07-12 16:02:57.296041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.229 [2024-07-12 16:02:57.296441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.229 [2024-07-12 16:02:57.296466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.229 [2024-07-12 16:02:57.296485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.229 [2024-07-12 16:02:57.296671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.229 [2024-07-12 16:02:57.296904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.229 [2024-07-12 16:02:57.296926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.229 [2024-07-12 16:02:57.296941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.229 [2024-07-12 16:02:57.299847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.229 [2024-07-12 16:02:57.309066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.229 [2024-07-12 16:02:57.309498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.229 [2024-07-12 16:02:57.309524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.229 [2024-07-12 16:02:57.309540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.229 [2024-07-12 16:02:57.309734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.229 [2024-07-12 16:02:57.310003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.229 [2024-07-12 16:02:57.310024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.229 [2024-07-12 16:02:57.310038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.229 [2024-07-12 16:02:57.313631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.229 [2024-07-12 16:02:57.322965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.229 [2024-07-12 16:02:57.323426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.229 [2024-07-12 16:02:57.323453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.229 [2024-07-12 16:02:57.323469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.229 [2024-07-12 16:02:57.323683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.229 [2024-07-12 16:02:57.323923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.229 [2024-07-12 16:02:57.323944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.229 [2024-07-12 16:02:57.323957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.229 [2024-07-12 16:02:57.326956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.230 [2024-07-12 16:02:57.336185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.230 [2024-07-12 16:02:57.336597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.230 [2024-07-12 16:02:57.336624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.230 [2024-07-12 16:02:57.336638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.230 [2024-07-12 16:02:57.336874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.230 [2024-07-12 16:02:57.337105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.230 [2024-07-12 16:02:57.337130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.230 [2024-07-12 16:02:57.337144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.230 [2024-07-12 16:02:57.340117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.230 [2024-07-12 16:02:57.349405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.230 [2024-07-12 16:02:57.349780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.230 [2024-07-12 16:02:57.349806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.230 [2024-07-12 16:02:57.349820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.230 [2024-07-12 16:02:57.350009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.230 [2024-07-12 16:02:57.350211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.230 [2024-07-12 16:02:57.350230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.230 [2024-07-12 16:02:57.350242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.230 [2024-07-12 16:02:57.353068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.230 [2024-07-12 16:02:57.362367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.230 [2024-07-12 16:02:57.362723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.230 [2024-07-12 16:02:57.362771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.230 [2024-07-12 16:02:57.362788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.230 [2024-07-12 16:02:57.362998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.230 [2024-07-12 16:02:57.363219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.230 [2024-07-12 16:02:57.363239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.230 [2024-07-12 16:02:57.363252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.230 [2024-07-12 16:02:57.366113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.230 [2024-07-12 16:02:57.375503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.230 [2024-07-12 16:02:57.375860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.230 [2024-07-12 16:02:57.375885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.230 [2024-07-12 16:02:57.375899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.230 [2024-07-12 16:02:57.376083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.230 [2024-07-12 16:02:57.376270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.230 [2024-07-12 16:02:57.376289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.230 [2024-07-12 16:02:57.376300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.230 [2024-07-12 16:02:57.379188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.230 [2024-07-12 16:02:57.388606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.230 [2024-07-12 16:02:57.388984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.230 [2024-07-12 16:02:57.389010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.230 [2024-07-12 16:02:57.389025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.230 [2024-07-12 16:02:57.389209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.230 [2024-07-12 16:02:57.389396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.230 [2024-07-12 16:02:57.389416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.230 [2024-07-12 16:02:57.389429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.230 [2024-07-12 16:02:57.392307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.230 [2024-07-12 16:02:57.401682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.230 [2024-07-12 16:02:57.402084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.230 [2024-07-12 16:02:57.402109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.230 [2024-07-12 16:02:57.402123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.230 [2024-07-12 16:02:57.402306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.230 [2024-07-12 16:02:57.402492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.230 [2024-07-12 16:02:57.402511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.230 [2024-07-12 16:02:57.402524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.230 [2024-07-12 16:02:57.405468] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.230 [2024-07-12 16:02:57.414883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.230 [2024-07-12 16:02:57.415305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.230 [2024-07-12 16:02:57.415330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.230 [2024-07-12 16:02:57.415345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.230 [2024-07-12 16:02:57.415528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.230 [2024-07-12 16:02:57.415715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.230 [2024-07-12 16:02:57.415733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.230 [2024-07-12 16:02:57.415772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.230 [2024-07-12 16:02:57.418640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.230 [2024-07-12 16:02:57.428013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.230 [2024-07-12 16:02:57.428414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.230 [2024-07-12 16:02:57.428439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.230 [2024-07-12 16:02:57.428453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.230 [2024-07-12 16:02:57.428642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.230 [2024-07-12 16:02:57.428873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.230 [2024-07-12 16:02:57.428894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.230 [2024-07-12 16:02:57.428907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.230 [2024-07-12 16:02:57.431773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.230 [2024-07-12 16:02:57.441254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.230 [2024-07-12 16:02:57.441625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.230 [2024-07-12 16:02:57.441651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.230 [2024-07-12 16:02:57.441665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.230 [2024-07-12 16:02:57.441900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.230 [2024-07-12 16:02:57.442130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.230 [2024-07-12 16:02:57.442151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.230 [2024-07-12 16:02:57.442163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.230 [2024-07-12 16:02:57.445132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.230 [2024-07-12 16:02:57.454386] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.230 [2024-07-12 16:02:57.454814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.230 [2024-07-12 16:02:57.454840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.230 [2024-07-12 16:02:57.454855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.230 [2024-07-12 16:02:57.455059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.230 [2024-07-12 16:02:57.455245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.230 [2024-07-12 16:02:57.455266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.230 [2024-07-12 16:02:57.455279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.230 [2024-07-12 16:02:57.458183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.230 [2024-07-12 16:02:57.467381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.230 [2024-07-12 16:02:57.467746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.230 [2024-07-12 16:02:57.467787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.230 [2024-07-12 16:02:57.467802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.230 [2024-07-12 16:02:57.467992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.230 [2024-07-12 16:02:57.468195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.230 [2024-07-12 16:02:57.468215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.230 [2024-07-12 16:02:57.468233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.231 [2024-07-12 16:02:57.471121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.231 [2024-07-12 16:02:57.480532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.231 [2024-07-12 16:02:57.480897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.231 [2024-07-12 16:02:57.480923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.231 [2024-07-12 16:02:57.480938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.231 [2024-07-12 16:02:57.481122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.231 [2024-07-12 16:02:57.481308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.231 [2024-07-12 16:02:57.481329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.231 [2024-07-12 16:02:57.481341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.231 [2024-07-12 16:02:57.484230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.231 [2024-07-12 16:02:57.493597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.231 [2024-07-12 16:02:57.494006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.231 [2024-07-12 16:02:57.494032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.231 [2024-07-12 16:02:57.494046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.231 [2024-07-12 16:02:57.494231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.231 [2024-07-12 16:02:57.494417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.231 [2024-07-12 16:02:57.494437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.231 [2024-07-12 16:02:57.494450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.231 [2024-07-12 16:02:57.497343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.231 [2024-07-12 16:02:57.506760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.231 [2024-07-12 16:02:57.507166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.231 [2024-07-12 16:02:57.507191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.231 [2024-07-12 16:02:57.507205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.231 [2024-07-12 16:02:57.507388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.231 [2024-07-12 16:02:57.507575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.231 [2024-07-12 16:02:57.507594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.231 [2024-07-12 16:02:57.507606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.231 [2024-07-12 16:02:57.510697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.527 [2024-07-12 16:02:57.520204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.527 [2024-07-12 16:02:57.520590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.527 [2024-07-12 16:02:57.520620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.527 [2024-07-12 16:02:57.520637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.527 [2024-07-12 16:02:57.520886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.527 [2024-07-12 16:02:57.521132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.527 [2024-07-12 16:02:57.521154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.527 [2024-07-12 16:02:57.521168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.527 [2024-07-12 16:02:57.524201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.527 [2024-07-12 16:02:57.533673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.527 [2024-07-12 16:02:57.534102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.527 [2024-07-12 16:02:57.534133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.527 [2024-07-12 16:02:57.534150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.527 [2024-07-12 16:02:57.534352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.527 [2024-07-12 16:02:57.534557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.527 [2024-07-12 16:02:57.534579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.527 [2024-07-12 16:02:57.534594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.527 [2024-07-12 16:02:57.537674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.527 [2024-07-12 16:02:57.546940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.527 [2024-07-12 16:02:57.547380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.527 [2024-07-12 16:02:57.547406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.527 [2024-07-12 16:02:57.547420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.527 [2024-07-12 16:02:57.547604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.527 [2024-07-12 16:02:57.547837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.527 [2024-07-12 16:02:57.547858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.527 [2024-07-12 16:02:57.547871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.527 [2024-07-12 16:02:57.550749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.527 [2024-07-12 16:02:57.559937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.527 [2024-07-12 16:02:57.560372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.527 [2024-07-12 16:02:57.560400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.527 [2024-07-12 16:02:57.560415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.527 [2024-07-12 16:02:57.560646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.527 [2024-07-12 16:02:57.560910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.527 [2024-07-12 16:02:57.560934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.527 [2024-07-12 16:02:57.560949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.527 [2024-07-12 16:02:57.564389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.527 [2024-07-12 16:02:57.573061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.527 [2024-07-12 16:02:57.573482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.527 [2024-07-12 16:02:57.573507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.527 [2024-07-12 16:02:57.573521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.527 [2024-07-12 16:02:57.573705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.527 [2024-07-12 16:02:57.573941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.527 [2024-07-12 16:02:57.573963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.527 [2024-07-12 16:02:57.573977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.527 [2024-07-12 16:02:57.576951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.527 [2024-07-12 16:02:57.586291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.527 [2024-07-12 16:02:57.586690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.528 [2024-07-12 16:02:57.586715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.528 [2024-07-12 16:02:57.586754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.528 [2024-07-12 16:02:57.586965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.528 [2024-07-12 16:02:57.587184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.528 [2024-07-12 16:02:57.587205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.528 [2024-07-12 16:02:57.587218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.528 [2024-07-12 16:02:57.590002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.528 [2024-07-12 16:02:57.599265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.528 [2024-07-12 16:02:57.599657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.528 [2024-07-12 16:02:57.599682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.528 [2024-07-12 16:02:57.599696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.528 [2024-07-12 16:02:57.599910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.528 [2024-07-12 16:02:57.600115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.528 [2024-07-12 16:02:57.600133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.528 [2024-07-12 16:02:57.600150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.528 [2024-07-12 16:02:57.603001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.528 [2024-07-12 16:02:57.612301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.528 [2024-07-12 16:02:57.612687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.528 [2024-07-12 16:02:57.612712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.528 [2024-07-12 16:02:57.612727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.528 [2024-07-12 16:02:57.612939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.528 [2024-07-12 16:02:57.613145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.528 [2024-07-12 16:02:57.613165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.528 [2024-07-12 16:02:57.613178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.528 [2024-07-12 16:02:57.616026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.528 [2024-07-12 16:02:57.625421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.528 [2024-07-12 16:02:57.625829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.528 [2024-07-12 16:02:57.625855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.528 [2024-07-12 16:02:57.625870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.528 [2024-07-12 16:02:57.626055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.528 [2024-07-12 16:02:57.626241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.528 [2024-07-12 16:02:57.626262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.528 [2024-07-12 16:02:57.626274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.528 [2024-07-12 16:02:57.629163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.528 [2024-07-12 16:02:57.638521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.528 [2024-07-12 16:02:57.638926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.528 [2024-07-12 16:02:57.638951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.528 [2024-07-12 16:02:57.638965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.528 [2024-07-12 16:02:57.639149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.528 [2024-07-12 16:02:57.639335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.528 [2024-07-12 16:02:57.639354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.528 [2024-07-12 16:02:57.639366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.528 [2024-07-12 16:02:57.642279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.528 [2024-07-12 16:02:57.651520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.528 [2024-07-12 16:02:57.651901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.528 [2024-07-12 16:02:57.651932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.528 [2024-07-12 16:02:57.651948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.528 [2024-07-12 16:02:57.652132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.528 [2024-07-12 16:02:57.652319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.528 [2024-07-12 16:02:57.652339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.528 [2024-07-12 16:02:57.652352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.528 [2024-07-12 16:02:57.655265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.528 [2024-07-12 16:02:57.664686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.528 [2024-07-12 16:02:57.665093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.528 [2024-07-12 16:02:57.665118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.528 [2024-07-12 16:02:57.665132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.528 [2024-07-12 16:02:57.665316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.528 [2024-07-12 16:02:57.665503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.528 [2024-07-12 16:02:57.665521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.528 [2024-07-12 16:02:57.665534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.528 [2024-07-12 16:02:57.668425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.528 [2024-07-12 16:02:57.677819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.528 [2024-07-12 16:02:57.678228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.528 [2024-07-12 16:02:57.678253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.528 [2024-07-12 16:02:57.678267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.528 [2024-07-12 16:02:57.678450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.528 [2024-07-12 16:02:57.678637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.528 [2024-07-12 16:02:57.678656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.528 [2024-07-12 16:02:57.678668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.528 [2024-07-12 16:02:57.681558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.528 [2024-07-12 16:02:57.690935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.528 [2024-07-12 16:02:57.691341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.528 [2024-07-12 16:02:57.691366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.528 [2024-07-12 16:02:57.691380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.528 [2024-07-12 16:02:57.691564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.528 [2024-07-12 16:02:57.691780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.528 [2024-07-12 16:02:57.691800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.528 [2024-07-12 16:02:57.691828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.528 [2024-07-12 16:02:57.694680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.528 [2024-07-12 16:02:57.704054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.528 [2024-07-12 16:02:57.704449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.528 [2024-07-12 16:02:57.704475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.528 [2024-07-12 16:02:57.704489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.528 [2024-07-12 16:02:57.704674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.528 [2024-07-12 16:02:57.704890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.528 [2024-07-12 16:02:57.704912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.528 [2024-07-12 16:02:57.704925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.528 [2024-07-12 16:02:57.707791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.528 [2024-07-12 16:02:57.717178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.528 [2024-07-12 16:02:57.717572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.528 [2024-07-12 16:02:57.717598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.528 [2024-07-12 16:02:57.717612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.528 [2024-07-12 16:02:57.717825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.528 [2024-07-12 16:02:57.718018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.528 [2024-07-12 16:02:57.718054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.528 [2024-07-12 16:02:57.718066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.528 [2024-07-12 16:02:57.720909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.528 [2024-07-12 16:02:57.730334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.528 [2024-07-12 16:02:57.730749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.529 [2024-07-12 16:02:57.730774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.529 [2024-07-12 16:02:57.730788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.529 [2024-07-12 16:02:57.730972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.529 [2024-07-12 16:02:57.731159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.529 [2024-07-12 16:02:57.731177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.529 [2024-07-12 16:02:57.731190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.529 [2024-07-12 16:02:57.733982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.529 [2024-07-12 16:02:57.743439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.529 [2024-07-12 16:02:57.743842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.529 [2024-07-12 16:02:57.743868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.529 [2024-07-12 16:02:57.743883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.529 [2024-07-12 16:02:57.744067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.529 [2024-07-12 16:02:57.744253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.529 [2024-07-12 16:02:57.744272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.529 [2024-07-12 16:02:57.744284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.529 [2024-07-12 16:02:57.747153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.529 [2024-07-12 16:02:57.756516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.529 [2024-07-12 16:02:57.756933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.529 [2024-07-12 16:02:57.756959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.529 [2024-07-12 16:02:57.756973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.529 [2024-07-12 16:02:57.757156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.529 [2024-07-12 16:02:57.757343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.529 [2024-07-12 16:02:57.757361] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.529 [2024-07-12 16:02:57.757374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.529 [2024-07-12 16:02:57.760247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.529 [2024-07-12 16:02:57.769653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.529 [2024-07-12 16:02:57.770076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.529 [2024-07-12 16:02:57.770101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.529 [2024-07-12 16:02:57.770115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.529 [2024-07-12 16:02:57.770299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.529 [2024-07-12 16:02:57.770486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.529 [2024-07-12 16:02:57.770504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.529 [2024-07-12 16:02:57.770516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.529 [2024-07-12 16:02:57.773442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.529 [2024-07-12 16:02:57.782727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.529 [2024-07-12 16:02:57.783079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.529 [2024-07-12 16:02:57.783104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.529 [2024-07-12 16:02:57.783122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.529 [2024-07-12 16:02:57.783307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.529 [2024-07-12 16:02:57.783493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.529 [2024-07-12 16:02:57.783512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.529 [2024-07-12 16:02:57.783525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.529 [2024-07-12 16:02:57.786414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.529 [2024-07-12 16:02:57.795831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.529 [2024-07-12 16:02:57.796225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.529 [2024-07-12 16:02:57.796250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.529 [2024-07-12 16:02:57.796264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.529 [2024-07-12 16:02:57.796448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.529 [2024-07-12 16:02:57.796635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.529 [2024-07-12 16:02:57.796653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.529 [2024-07-12 16:02:57.796665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.529 [2024-07-12 16:02:57.799556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.529 [2024-07-12 16:02:57.808896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.529 [2024-07-12 16:02:57.809261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.529 [2024-07-12 16:02:57.809286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.529 [2024-07-12 16:02:57.809300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.529 [2024-07-12 16:02:57.809483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.529 [2024-07-12 16:02:57.809670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.529 [2024-07-12 16:02:57.809691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.529 [2024-07-12 16:02:57.809704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.529 [2024-07-12 16:02:57.812591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.788 [2024-07-12 16:02:57.822392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.788 [2024-07-12 16:02:57.822781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.788 [2024-07-12 16:02:57.822824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.788 [2024-07-12 16:02:57.822842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.788 [2024-07-12 16:02:57.823066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.788 [2024-07-12 16:02:57.823269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.788 [2024-07-12 16:02:57.823292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.788 [2024-07-12 16:02:57.823305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.788 [2024-07-12 16:02:57.826583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.788 [2024-07-12 16:02:57.835733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.788 [2024-07-12 16:02:57.836072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.788 [2024-07-12 16:02:57.836114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.788 [2024-07-12 16:02:57.836129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.788 [2024-07-12 16:02:57.836313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.788 [2024-07-12 16:02:57.836501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.788 [2024-07-12 16:02:57.836520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.788 [2024-07-12 16:02:57.836532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.788 [2024-07-12 16:02:57.839521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.788 [2024-07-12 16:02:57.848896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.788 [2024-07-12 16:02:57.849206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.788 [2024-07-12 16:02:57.849231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.788 [2024-07-12 16:02:57.849246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.788 [2024-07-12 16:02:57.849429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.788 [2024-07-12 16:02:57.849617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.788 [2024-07-12 16:02:57.849637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.788 [2024-07-12 16:02:57.849649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.788 [2024-07-12 16:02:57.852660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.788 [2024-07-12 16:02:57.862120] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.788 [2024-07-12 16:02:57.862431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.788 [2024-07-12 16:02:57.862456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.788 [2024-07-12 16:02:57.862471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.788 [2024-07-12 16:02:57.862654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.788 [2024-07-12 16:02:57.862872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.788 [2024-07-12 16:02:57.862892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.788 [2024-07-12 16:02:57.862905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.788 [2024-07-12 16:02:57.865817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.788 [2024-07-12 16:02:57.875345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.788 [2024-07-12 16:02:57.875680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.788 [2024-07-12 16:02:57.875705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.788 [2024-07-12 16:02:57.875733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.788 [2024-07-12 16:02:57.875939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.788 [2024-07-12 16:02:57.876146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.788 [2024-07-12 16:02:57.876166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.788 [2024-07-12 16:02:57.876179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.788 [2024-07-12 16:02:57.879068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.788 [2024-07-12 16:02:57.888548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.788 [2024-07-12 16:02:57.888879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.788 [2024-07-12 16:02:57.888905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.788 [2024-07-12 16:02:57.888920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.788 [2024-07-12 16:02:57.889128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.788 [2024-07-12 16:02:57.889339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.788 [2024-07-12 16:02:57.889359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.789 [2024-07-12 16:02:57.889372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.789 [2024-07-12 16:02:57.892620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.789 [2024-07-12 16:02:57.902104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.789 [2024-07-12 16:02:57.902515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.789 [2024-07-12 16:02:57.902540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.789 [2024-07-12 16:02:57.902562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.789 [2024-07-12 16:02:57.902801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.789 [2024-07-12 16:02:57.903034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.789 [2024-07-12 16:02:57.903066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.789 [2024-07-12 16:02:57.903095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.789 [2024-07-12 16:02:57.906280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.789 [2024-07-12 16:02:57.915558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.789 [2024-07-12 16:02:57.915891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.789 [2024-07-12 16:02:57.915920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.789 [2024-07-12 16:02:57.915936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.789 [2024-07-12 16:02:57.916188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.789 [2024-07-12 16:02:57.916388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.789 [2024-07-12 16:02:57.916410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.789 [2024-07-12 16:02:57.916424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.789 [2024-07-12 16:02:57.919704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.789 [2024-07-12 16:02:57.928861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.789 [2024-07-12 16:02:57.929236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.789 [2024-07-12 16:02:57.929260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.789 [2024-07-12 16:02:57.929275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.789 [2024-07-12 16:02:57.929458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.789 [2024-07-12 16:02:57.929666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.789 [2024-07-12 16:02:57.929686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.789 [2024-07-12 16:02:57.929698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.789 [2024-07-12 16:02:57.932750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.789 [2024-07-12 16:02:57.942134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.789 [2024-07-12 16:02:57.942466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.789 [2024-07-12 16:02:57.942490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.789 [2024-07-12 16:02:57.942505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.789 [2024-07-12 16:02:57.942688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.789 [2024-07-12 16:02:57.942921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.789 [2024-07-12 16:02:57.942942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.789 [2024-07-12 16:02:57.942956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.789 [2024-07-12 16:02:57.945899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.789 [2024-07-12 16:02:57.955363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.789 [2024-07-12 16:02:57.955677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.789 [2024-07-12 16:02:57.955703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.789 [2024-07-12 16:02:57.955718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.789 [2024-07-12 16:02:57.955941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.789 [2024-07-12 16:02:57.956150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.789 [2024-07-12 16:02:57.956169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.789 [2024-07-12 16:02:57.956186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.789 [2024-07-12 16:02:57.959087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.789 [2024-07-12 16:02:57.968492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.789 [2024-07-12 16:02:57.968840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.789 [2024-07-12 16:02:57.968865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.789 [2024-07-12 16:02:57.968879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.789 [2024-07-12 16:02:57.969062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.789 [2024-07-12 16:02:57.969250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.789 [2024-07-12 16:02:57.969269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.789 [2024-07-12 16:02:57.969282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.789 [2024-07-12 16:02:57.972181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.789 [2024-07-12 16:02:57.981627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.789 [2024-07-12 16:02:57.981985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.789 [2024-07-12 16:02:57.982010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.789 [2024-07-12 16:02:57.982025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.789 [2024-07-12 16:02:57.982223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.789 [2024-07-12 16:02:57.982411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.789 [2024-07-12 16:02:57.982430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.789 [2024-07-12 16:02:57.982442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.789 [2024-07-12 16:02:57.985357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.789 [2024-07-12 16:02:57.994713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.789 [2024-07-12 16:02:57.995071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.789 [2024-07-12 16:02:57.995096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.789 [2024-07-12 16:02:57.995110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.789 [2024-07-12 16:02:57.995293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.789 [2024-07-12 16:02:57.995480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.789 [2024-07-12 16:02:57.995499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.789 [2024-07-12 16:02:57.995512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.789 [2024-07-12 16:02:57.998416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.789 [2024-07-12 16:02:58.008096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.789 [2024-07-12 16:02:58.008410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.789 [2024-07-12 16:02:58.008435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.789 [2024-07-12 16:02:58.008449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.789 [2024-07-12 16:02:58.008632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.789 [2024-07-12 16:02:58.008857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.789 [2024-07-12 16:02:58.008878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.789 [2024-07-12 16:02:58.008892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.789 [2024-07-12 16:02:58.011946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.789 [2024-07-12 16:02:58.021701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.789 [2024-07-12 16:02:58.022087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.789 [2024-07-12 16:02:58.022115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.789 [2024-07-12 16:02:58.022140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.789 [2024-07-12 16:02:58.022346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.789 [2024-07-12 16:02:58.022558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.789 [2024-07-12 16:02:58.022579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.789 [2024-07-12 16:02:58.022593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.789 [2024-07-12 16:02:58.025807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.789 [2024-07-12 16:02:58.034956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.789 [2024-07-12 16:02:58.035353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.789 [2024-07-12 16:02:58.035379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.789 [2024-07-12 16:02:58.035394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.789 [2024-07-12 16:02:58.035582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.789 [2024-07-12 16:02:58.035804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.789 [2024-07-12 16:02:58.035825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.790 [2024-07-12 16:02:58.035838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.790 [2024-07-12 16:02:58.038817] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.790 [2024-07-12 16:02:58.048170] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.790 [2024-07-12 16:02:58.048542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.790 [2024-07-12 16:02:58.048568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.790 [2024-07-12 16:02:58.048582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.790 [2024-07-12 16:02:58.048797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.790 [2024-07-12 16:02:58.049024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.790 [2024-07-12 16:02:58.049046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.790 [2024-07-12 16:02:58.049074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.790 [2024-07-12 16:02:58.052010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:00.790 [2024-07-12 16:02:58.061499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.790 [2024-07-12 16:02:58.061920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.790 [2024-07-12 16:02:58.061948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.790 [2024-07-12 16:02:58.061965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.790 [2024-07-12 16:02:58.062201] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.790 [2024-07-12 16:02:58.062398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.790 [2024-07-12 16:02:58.062418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.790 [2024-07-12 16:02:58.062431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.790 [2024-07-12 16:02:58.065457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:00.790 [2024-07-12 16:02:58.075012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:00.790 [2024-07-12 16:02:58.075442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:00.790 [2024-07-12 16:02:58.075468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:00.790 [2024-07-12 16:02:58.075482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:00.790 [2024-07-12 16:02:58.075672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:00.790 [2024-07-12 16:02:58.075913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:00.790 [2024-07-12 16:02:58.075934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:00.790 [2024-07-12 16:02:58.075948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:00.790 [2024-07-12 16:02:58.079504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.049 [2024-07-12 16:02:58.088415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.049 [2024-07-12 16:02:58.088797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.049 [2024-07-12 16:02:58.088830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.049 [2024-07-12 16:02:58.088846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.049 [2024-07-12 16:02:58.089063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.049 [2024-07-12 16:02:58.089271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.049 [2024-07-12 16:02:58.089292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.049 [2024-07-12 16:02:58.089305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.049 [2024-07-12 16:02:58.092295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.049 [2024-07-12 16:02:58.101711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.049 [2024-07-12 16:02:58.102130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.049 [2024-07-12 16:02:58.102156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.049 [2024-07-12 16:02:58.102170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.049 [2024-07-12 16:02:58.102359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.049 [2024-07-12 16:02:58.102551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.049 [2024-07-12 16:02:58.102572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.049 [2024-07-12 16:02:58.102585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.049 [2024-07-12 16:02:58.105530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.049 [2024-07-12 16:02:58.115018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.049 [2024-07-12 16:02:58.115446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.049 [2024-07-12 16:02:58.115472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.049 [2024-07-12 16:02:58.115489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.049 [2024-07-12 16:02:58.115678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.049 [2024-07-12 16:02:58.115918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.049 [2024-07-12 16:02:58.115940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.049 [2024-07-12 16:02:58.115954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.049 [2024-07-12 16:02:58.118913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.049 [2024-07-12 16:02:58.128332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.049 [2024-07-12 16:02:58.128757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.049 [2024-07-12 16:02:58.128784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.049 [2024-07-12 16:02:58.128809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.049 [2024-07-12 16:02:58.129004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.049 [2024-07-12 16:02:58.129211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.049 [2024-07-12 16:02:58.129230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.049 [2024-07-12 16:02:58.129243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.049 [2024-07-12 16:02:58.132237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.049 [2024-07-12 16:02:58.141659] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.049 [2024-07-12 16:02:58.142101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.049 [2024-07-12 16:02:58.142137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.049 [2024-07-12 16:02:58.142156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.049 [2024-07-12 16:02:58.142346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.049 [2024-07-12 16:02:58.142539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.049 [2024-07-12 16:02:58.142560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.049 [2024-07-12 16:02:58.142573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.049 [2024-07-12 16:02:58.145561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.049 [2024-07-12 16:02:58.154965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.049 [2024-07-12 16:02:58.155366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.049 [2024-07-12 16:02:58.155392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.049 [2024-07-12 16:02:58.155407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.049 [2024-07-12 16:02:58.155596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.049 [2024-07-12 16:02:58.155835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.049 [2024-07-12 16:02:58.155859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.049 [2024-07-12 16:02:58.155874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.049 [2024-07-12 16:02:58.158849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.049 [2024-07-12 16:02:58.168264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.049 [2024-07-12 16:02:58.168673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.049 [2024-07-12 16:02:58.168698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.049 [2024-07-12 16:02:58.168713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.049 [2024-07-12 16:02:58.168951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.049 [2024-07-12 16:02:58.169181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.049 [2024-07-12 16:02:58.169202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.049 [2024-07-12 16:02:58.169216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.049 [2024-07-12 16:02:58.172158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.049 [2024-07-12 16:02:58.181547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.049 [2024-07-12 16:02:58.182000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.049 [2024-07-12 16:02:58.182027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.049 [2024-07-12 16:02:58.182043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.049 [2024-07-12 16:02:58.182266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.049 [2024-07-12 16:02:58.182463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.049 [2024-07-12 16:02:58.182482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.049 [2024-07-12 16:02:58.182495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.049 [2024-07-12 16:02:58.185450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.049 [2024-07-12 16:02:58.194808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.049 [2024-07-12 16:02:58.195249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.049 [2024-07-12 16:02:58.195274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.049 [2024-07-12 16:02:58.195289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.049 [2024-07-12 16:02:58.195477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.049 [2024-07-12 16:02:58.195669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.050 [2024-07-12 16:02:58.195688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.050 [2024-07-12 16:02:58.195700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.050 [2024-07-12 16:02:58.198669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.050 [2024-07-12 16:02:58.208141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.050 [2024-07-12 16:02:58.208558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.050 [2024-07-12 16:02:58.208583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.050 [2024-07-12 16:02:58.208598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.050 [2024-07-12 16:02:58.208827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.050 [2024-07-12 16:02:58.209034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.050 [2024-07-12 16:02:58.209056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.050 [2024-07-12 16:02:58.209070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.050 [2024-07-12 16:02:58.212016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.050 [2024-07-12 16:02:58.221423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.050 [2024-07-12 16:02:58.221794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.050 [2024-07-12 16:02:58.221821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.050 [2024-07-12 16:02:58.221836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.050 [2024-07-12 16:02:58.222030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.050 [2024-07-12 16:02:58.222239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.050 [2024-07-12 16:02:58.222261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.050 [2024-07-12 16:02:58.222275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.050 [2024-07-12 16:02:58.225290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.050 [2024-07-12 16:02:58.234686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.050 [2024-07-12 16:02:58.235121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.050 [2024-07-12 16:02:58.235147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.050 [2024-07-12 16:02:58.235162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.050 [2024-07-12 16:02:58.235350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.050 [2024-07-12 16:02:58.235542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.050 [2024-07-12 16:02:58.235562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.050 [2024-07-12 16:02:58.235574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.050 [2024-07-12 16:02:58.238561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.050 [2024-07-12 16:02:58.247987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.050 [2024-07-12 16:02:58.248388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.050 [2024-07-12 16:02:58.248413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.050 [2024-07-12 16:02:58.248427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.050 [2024-07-12 16:02:58.248616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.050 [2024-07-12 16:02:58.248852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.050 [2024-07-12 16:02:58.248875] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.050 [2024-07-12 16:02:58.248889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.050 [2024-07-12 16:02:58.251852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.050 [2024-07-12 16:02:58.261293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.050 [2024-07-12 16:02:58.261704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.050 [2024-07-12 16:02:58.261729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.050 [2024-07-12 16:02:58.261767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.050 [2024-07-12 16:02:58.261984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.050 [2024-07-12 16:02:58.262202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.050 [2024-07-12 16:02:58.262222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.050 [2024-07-12 16:02:58.262236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.050 [2024-07-12 16:02:58.265178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.050 [2024-07-12 16:02:58.274591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.050 [2024-07-12 16:02:58.274994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.050 [2024-07-12 16:02:58.275020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.050 [2024-07-12 16:02:58.275062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.050 [2024-07-12 16:02:58.275253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.050 [2024-07-12 16:02:58.275445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.050 [2024-07-12 16:02:58.275466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.050 [2024-07-12 16:02:58.275479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.050 [2024-07-12 16:02:58.278475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.050 [2024-07-12 16:02:58.287914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.050 [2024-07-12 16:02:58.288335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.050 [2024-07-12 16:02:58.288359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.050 [2024-07-12 16:02:58.288374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.050 [2024-07-12 16:02:58.288562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.050 [2024-07-12 16:02:58.288781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.050 [2024-07-12 16:02:58.288816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.050 [2024-07-12 16:02:58.288830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.050 [2024-07-12 16:02:58.291801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.050 [2024-07-12 16:02:58.301182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.050 [2024-07-12 16:02:58.301589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.050 [2024-07-12 16:02:58.301614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.050 [2024-07-12 16:02:58.301629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.050 [2024-07-12 16:02:58.301863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.050 [2024-07-12 16:02:58.302082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.050 [2024-07-12 16:02:58.302118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.050 [2024-07-12 16:02:58.302131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.050 [2024-07-12 16:02:58.305075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.050 [2024-07-12 16:02:58.314469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.050 [2024-07-12 16:02:58.314862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.050 [2024-07-12 16:02:58.314888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.050 [2024-07-12 16:02:58.314903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.050 [2024-07-12 16:02:58.315114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.050 [2024-07-12 16:02:58.315307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.050 [2024-07-12 16:02:58.315332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.050 [2024-07-12 16:02:58.315345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.050 [2024-07-12 16:02:58.318338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.050 [2024-07-12 16:02:58.327950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.050 [2024-07-12 16:02:58.328366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.050 [2024-07-12 16:02:58.328391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.050 [2024-07-12 16:02:58.328405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.050 [2024-07-12 16:02:58.328594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.050 [2024-07-12 16:02:58.328818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.050 [2024-07-12 16:02:58.328839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.050 [2024-07-12 16:02:58.328853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.050 [2024-07-12 16:02:58.331860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.309 [2024-07-12 16:02:58.341715] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.309 [2024-07-12 16:02:58.342180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.309 [2024-07-12 16:02:58.342209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.309 [2024-07-12 16:02:58.342225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.309 [2024-07-12 16:02:58.342435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.309 [2024-07-12 16:02:58.342646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.309 [2024-07-12 16:02:58.342668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.309 [2024-07-12 16:02:58.342681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.309 [2024-07-12 16:02:58.345782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.309 [2024-07-12 16:02:58.354927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.309 [2024-07-12 16:02:58.355323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.310 [2024-07-12 16:02:58.355351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.310 [2024-07-12 16:02:58.355366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.310 [2024-07-12 16:02:58.355556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.310 [2024-07-12 16:02:58.355775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.310 [2024-07-12 16:02:58.355812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.310 [2024-07-12 16:02:58.355826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.310 [2024-07-12 16:02:58.358797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.310 [2024-07-12 16:02:58.368188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.310 [2024-07-12 16:02:58.368551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.310 [2024-07-12 16:02:58.368577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.310 [2024-07-12 16:02:58.368592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.310 [2024-07-12 16:02:58.368825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.310 [2024-07-12 16:02:58.369046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.310 [2024-07-12 16:02:58.369067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.310 [2024-07-12 16:02:58.369081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.310 [2024-07-12 16:02:58.372039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.310 [2024-07-12 16:02:58.381410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.310 [2024-07-12 16:02:58.381810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.310 [2024-07-12 16:02:58.381837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.310 [2024-07-12 16:02:58.381852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.310 [2024-07-12 16:02:58.382047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.310 [2024-07-12 16:02:58.382255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.310 [2024-07-12 16:02:58.382276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.310 [2024-07-12 16:02:58.382288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.310 [2024-07-12 16:02:58.385240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.310 [2024-07-12 16:02:58.394669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.310 [2024-07-12 16:02:58.395078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.310 [2024-07-12 16:02:58.395104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.310 [2024-07-12 16:02:58.395118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.310 [2024-07-12 16:02:58.395307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.310 [2024-07-12 16:02:58.395504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.310 [2024-07-12 16:02:58.395525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.310 [2024-07-12 16:02:58.395538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.310 [2024-07-12 16:02:58.398507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.310 [2024-07-12 16:02:58.407903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.310 [2024-07-12 16:02:58.408336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.310 [2024-07-12 16:02:58.408361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.310 [2024-07-12 16:02:58.408376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.310 [2024-07-12 16:02:58.408569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.310 [2024-07-12 16:02:58.408789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.310 [2024-07-12 16:02:58.408826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.310 [2024-07-12 16:02:58.408841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.310 [2024-07-12 16:02:58.411811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.310 [2024-07-12 16:02:58.421160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.310 [2024-07-12 16:02:58.421513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.310 [2024-07-12 16:02:58.421538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.310 [2024-07-12 16:02:58.421552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.310 [2024-07-12 16:02:58.421769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.310 [2024-07-12 16:02:58.421988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.310 [2024-07-12 16:02:58.422010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.310 [2024-07-12 16:02:58.422023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.310 [2024-07-12 16:02:58.425017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.310 [2024-07-12 16:02:58.434413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.310 [2024-07-12 16:02:58.434787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.310 [2024-07-12 16:02:58.434823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.310 [2024-07-12 16:02:58.434838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.310 [2024-07-12 16:02:58.435049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.310 [2024-07-12 16:02:58.435241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.310 [2024-07-12 16:02:58.435262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.310 [2024-07-12 16:02:58.435276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.310 [2024-07-12 16:02:58.438408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.310 [2024-07-12 16:02:58.447660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.310 [2024-07-12 16:02:58.448107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.310 [2024-07-12 16:02:58.448133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.310 [2024-07-12 16:02:58.448147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.310 [2024-07-12 16:02:58.448336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.310 [2024-07-12 16:02:58.448528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.310 [2024-07-12 16:02:58.448550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.310 [2024-07-12 16:02:58.448567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.311 [2024-07-12 16:02:58.451531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.311 [2024-07-12 16:02:58.460970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.311 [2024-07-12 16:02:58.461351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.311 [2024-07-12 16:02:58.461378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.311 [2024-07-12 16:02:58.461393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.311 [2024-07-12 16:02:58.461583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.311 [2024-07-12 16:02:58.461818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.311 [2024-07-12 16:02:58.461841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.311 [2024-07-12 16:02:58.461856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.311 [2024-07-12 16:02:58.464820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.311 [2024-07-12 16:02:58.474210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.311 [2024-07-12 16:02:58.474622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.311 [2024-07-12 16:02:58.474648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.311 [2024-07-12 16:02:58.474663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.311 [2024-07-12 16:02:58.474900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.311 [2024-07-12 16:02:58.475132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.311 [2024-07-12 16:02:58.475153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.311 [2024-07-12 16:02:58.475166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.311 [2024-07-12 16:02:58.478133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.311 [2024-07-12 16:02:58.487365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.311 [2024-07-12 16:02:58.487757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.311 [2024-07-12 16:02:58.487795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.311 [2024-07-12 16:02:58.487811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.311 [2024-07-12 16:02:58.488006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.311 [2024-07-12 16:02:58.488214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.311 [2024-07-12 16:02:58.488235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.311 [2024-07-12 16:02:58.488248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.311 [2024-07-12 16:02:58.491234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.311 [2024-07-12 16:02:58.500602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.311 [2024-07-12 16:02:58.501049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.311 [2024-07-12 16:02:58.501079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.311 [2024-07-12 16:02:58.501094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.311 [2024-07-12 16:02:58.501283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.311 [2024-07-12 16:02:58.501475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.311 [2024-07-12 16:02:58.501495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.311 [2024-07-12 16:02:58.501507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.311 [2024-07-12 16:02:58.504530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.311 [2024-07-12 16:02:58.513832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.311 [2024-07-12 16:02:58.514267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.311 [2024-07-12 16:02:58.514292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.311 [2024-07-12 16:02:58.514306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.311 [2024-07-12 16:02:58.514494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.311 [2024-07-12 16:02:58.514687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.311 [2024-07-12 16:02:58.514706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.311 [2024-07-12 16:02:58.514733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.311 [2024-07-12 16:02:58.517703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:01.311 [2024-07-12 16:02:58.527119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:01.311 [2024-07-12 16:02:58.527499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:01.311 [2024-07-12 16:02:58.527524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:01.311 [2024-07-12 16:02:58.527539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:01.311 [2024-07-12 16:02:58.527752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:01.311 [2024-07-12 16:02:58.527972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:01.311 [2024-07-12 16:02:58.527995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:01.311 [2024-07-12 16:02:58.528009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:01.311 [2024-07-12 16:02:58.530970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:01.311 - 00:26:02.092 [2024-07-12 16:02:58.540373 - 16:02:59.194902] ... 50 further near-identical reset/reconnect cycles for [nqn.2016-06.io.spdk:cnode1] omitted (differing only in timestamps): each cycle logs nvme_ctrlr_disconnect resetting the controller, posix_sock_create connect() failing with errno = 111 for tqpair=0x1c3b080 against 10.0.0.2:4420, nvme_tcp_qpair_process_completions failing to flush the tqpair (9: Bad file descriptor), nvme_ctrlr_process_init reporting "Ctrlr is in error state", spdk_nvme_ctrlr_reconnect_poll_async reporting "controller reinitialization failed", nvme_ctrlr_fail marking the controller "in failed state", and _bdev_nvme_reset_ctrlr_complete reporting "Resetting controller failed." ...
00:26:02.092 [2024-07-12 16:02:59.204320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.092 [2024-07-12 16:02:59.204700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.092 [2024-07-12 16:02:59.204758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.092 [2024-07-12 16:02:59.204774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.092 [2024-07-12 16:02:59.204995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.092 [2024-07-12 16:02:59.205252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.092 [2024-07-12 16:02:59.205274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.092 [2024-07-12 16:02:59.205287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.092 [2024-07-12 16:02:59.208403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.092 [2024-07-12 16:02:59.217606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.092 [2024-07-12 16:02:59.218044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.092 [2024-07-12 16:02:59.218069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.092 [2024-07-12 16:02:59.218084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.092 [2024-07-12 16:02:59.218282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.092 [2024-07-12 16:02:59.218469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.092 [2024-07-12 16:02:59.218488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.092 [2024-07-12 16:02:59.218502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.092 [2024-07-12 16:02:59.221497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.092 [2024-07-12 16:02:59.230855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.092 [2024-07-12 16:02:59.231287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.092 [2024-07-12 16:02:59.231312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.092 [2024-07-12 16:02:59.231327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.092 [2024-07-12 16:02:59.231510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.092 [2024-07-12 16:02:59.231697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.092 [2024-07-12 16:02:59.231718] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.092 [2024-07-12 16:02:59.231758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.092 [2024-07-12 16:02:59.234645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.092 [2024-07-12 16:02:59.244005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.092 [2024-07-12 16:02:59.244380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.092 [2024-07-12 16:02:59.244405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.092 [2024-07-12 16:02:59.244419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.092 [2024-07-12 16:02:59.244602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.092 [2024-07-12 16:02:59.244815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.092 [2024-07-12 16:02:59.244835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.092 [2024-07-12 16:02:59.244848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.092 [2024-07-12 16:02:59.247688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.092 [2024-07-12 16:02:59.257097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.092 [2024-07-12 16:02:59.257500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.092 [2024-07-12 16:02:59.257525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.092 [2024-07-12 16:02:59.257540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.092 [2024-07-12 16:02:59.257757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.092 [2024-07-12 16:02:59.257964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.092 [2024-07-12 16:02:59.257983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.092 [2024-07-12 16:02:59.257996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.092 [2024-07-12 16:02:59.260855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.092 [2024-07-12 16:02:59.270100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.092 [2024-07-12 16:02:59.270503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.092 [2024-07-12 16:02:59.270555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.092 [2024-07-12 16:02:59.270570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.092 [2024-07-12 16:02:59.270780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.092 [2024-07-12 16:02:59.270991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.092 [2024-07-12 16:02:59.271013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.092 [2024-07-12 16:02:59.271027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.092 [2024-07-12 16:02:59.273929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.092 [2024-07-12 16:02:59.283116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.092 [2024-07-12 16:02:59.283492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.092 [2024-07-12 16:02:59.283543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.092 [2024-07-12 16:02:59.283557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.092 [2024-07-12 16:02:59.283749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.092 [2024-07-12 16:02:59.283956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.092 [2024-07-12 16:02:59.283975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.092 [2024-07-12 16:02:59.283988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.092 [2024-07-12 16:02:59.286845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.092 [2024-07-12 16:02:59.296259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.092 [2024-07-12 16:02:59.296614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.092 [2024-07-12 16:02:59.296665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.093 [2024-07-12 16:02:59.296680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.093 [2024-07-12 16:02:59.296893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.093 [2024-07-12 16:02:59.297101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.093 [2024-07-12 16:02:59.297121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.093 [2024-07-12 16:02:59.297139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.093 [2024-07-12 16:02:59.299983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.093 [2024-07-12 16:02:59.309414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.093 [2024-07-12 16:02:59.309807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.093 [2024-07-12 16:02:59.309842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.093 [2024-07-12 16:02:59.309857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.093 [2024-07-12 16:02:59.310061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.093 [2024-07-12 16:02:59.310248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.093 [2024-07-12 16:02:59.310267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.093 [2024-07-12 16:02:59.310279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.093 [2024-07-12 16:02:59.313152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.093 [2024-07-12 16:02:59.322559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.093 [2024-07-12 16:02:59.322986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.093 [2024-07-12 16:02:59.323035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.093 [2024-07-12 16:02:59.323049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.093 [2024-07-12 16:02:59.323232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.093 [2024-07-12 16:02:59.323419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.093 [2024-07-12 16:02:59.323438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.093 [2024-07-12 16:02:59.323450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.093 [2024-07-12 16:02:59.326341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.093 [2024-07-12 16:02:59.335905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.093 [2024-07-12 16:02:59.336342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.093 [2024-07-12 16:02:59.336367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.093 [2024-07-12 16:02:59.336380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.093 [2024-07-12 16:02:59.336564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.093 [2024-07-12 16:02:59.336780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.093 [2024-07-12 16:02:59.336801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.093 [2024-07-12 16:02:59.336815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.093 [2024-07-12 16:02:59.339760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.093 [2024-07-12 16:02:59.348979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.093 [2024-07-12 16:02:59.349391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.093 [2024-07-12 16:02:59.349416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.093 [2024-07-12 16:02:59.349430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.093 [2024-07-12 16:02:59.349614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.093 [2024-07-12 16:02:59.349843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.093 [2024-07-12 16:02:59.349863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.093 [2024-07-12 16:02:59.349877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.093 [2024-07-12 16:02:59.352722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.093 [2024-07-12 16:02:59.361978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.093 [2024-07-12 16:02:59.362376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.093 [2024-07-12 16:02:59.362401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.093 [2024-07-12 16:02:59.362415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.093 [2024-07-12 16:02:59.362598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.093 [2024-07-12 16:02:59.362813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.093 [2024-07-12 16:02:59.362833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.093 [2024-07-12 16:02:59.362846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.093 [2024-07-12 16:02:59.365683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.093 [2024-07-12 16:02:59.375099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.093 [2024-07-12 16:02:59.375532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.093 [2024-07-12 16:02:59.375584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.093 [2024-07-12 16:02:59.375598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.093 [2024-07-12 16:02:59.375809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.093 [2024-07-12 16:02:59.376001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.093 [2024-07-12 16:02:59.376021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.093 [2024-07-12 16:02:59.376033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.093 [2024-07-12 16:02:59.378919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.352 [2024-07-12 16:02:59.388311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.352 [2024-07-12 16:02:59.388805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.352 [2024-07-12 16:02:59.388845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.352 [2024-07-12 16:02:59.388872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.352 [2024-07-12 16:02:59.389172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.352 [2024-07-12 16:02:59.389403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.352 [2024-07-12 16:02:59.389426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.352 [2024-07-12 16:02:59.389440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.352 [2024-07-12 16:02:59.392402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.352 [2024-07-12 16:02:59.401480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.352 [2024-07-12 16:02:59.401890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.352 [2024-07-12 16:02:59.401916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.352 [2024-07-12 16:02:59.401930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.352 [2024-07-12 16:02:59.402115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.352 [2024-07-12 16:02:59.402302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.352 [2024-07-12 16:02:59.402321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.352 [2024-07-12 16:02:59.402333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.352 [2024-07-12 16:02:59.405222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.352 [2024-07-12 16:02:59.414664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.352 [2024-07-12 16:02:59.415087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.352 [2024-07-12 16:02:59.415114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.352 [2024-07-12 16:02:59.415129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.352 [2024-07-12 16:02:59.415313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.352 [2024-07-12 16:02:59.415499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.352 [2024-07-12 16:02:59.415517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.352 [2024-07-12 16:02:59.415529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.352 [2024-07-12 16:02:59.418420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.352 [2024-07-12 16:02:59.427761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.352 [2024-07-12 16:02:59.428171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.352 [2024-07-12 16:02:59.428197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.352 [2024-07-12 16:02:59.428211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.352 [2024-07-12 16:02:59.428394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.352 [2024-07-12 16:02:59.428581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.352 [2024-07-12 16:02:59.428599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.352 [2024-07-12 16:02:59.428616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.352 [2024-07-12 16:02:59.431531] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.352 [2024-07-12 16:02:59.440717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.352 [2024-07-12 16:02:59.441119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.352 [2024-07-12 16:02:59.441144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.352 [2024-07-12 16:02:59.441158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.352 [2024-07-12 16:02:59.441341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.352 [2024-07-12 16:02:59.441529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.352 [2024-07-12 16:02:59.441547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.352 [2024-07-12 16:02:59.441559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.353 [2024-07-12 16:02:59.444486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.353 [2024-07-12 16:02:59.453692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.353 [2024-07-12 16:02:59.454101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.353 [2024-07-12 16:02:59.454126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.353 [2024-07-12 16:02:59.454140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.353 [2024-07-12 16:02:59.454324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.353 [2024-07-12 16:02:59.454510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.353 [2024-07-12 16:02:59.454528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.353 [2024-07-12 16:02:59.454541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.353 [2024-07-12 16:02:59.457445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.353 [2024-07-12 16:02:59.466690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.353 [2024-07-12 16:02:59.467095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.353 [2024-07-12 16:02:59.467120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.353 [2024-07-12 16:02:59.467134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.353 [2024-07-12 16:02:59.467318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.353 [2024-07-12 16:02:59.467504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.353 [2024-07-12 16:02:59.467522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.353 [2024-07-12 16:02:59.467534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.353 [2024-07-12 16:02:59.470397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.353 [2024-07-12 16:02:59.479804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.353 [2024-07-12 16:02:59.480209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.353 [2024-07-12 16:02:59.480237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.353 [2024-07-12 16:02:59.480252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.353 [2024-07-12 16:02:59.480435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.353 [2024-07-12 16:02:59.480621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.353 [2024-07-12 16:02:59.480640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.353 [2024-07-12 16:02:59.480652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.353 [2024-07-12 16:02:59.483518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.353 [2024-07-12 16:02:59.492970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.353 [2024-07-12 16:02:59.493370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.353 [2024-07-12 16:02:59.493395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.353 [2024-07-12 16:02:59.493409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.353 [2024-07-12 16:02:59.493592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.353 [2024-07-12 16:02:59.493807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.353 [2024-07-12 16:02:59.493827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.353 [2024-07-12 16:02:59.493840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.353 [2024-07-12 16:02:59.496677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.353 [2024-07-12 16:02:59.506126] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.353 [2024-07-12 16:02:59.506524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.353 [2024-07-12 16:02:59.506548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.353 [2024-07-12 16:02:59.506563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.353 [2024-07-12 16:02:59.506769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.353 [2024-07-12 16:02:59.506962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.353 [2024-07-12 16:02:59.506981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.353 [2024-07-12 16:02:59.506994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.353 [2024-07-12 16:02:59.509888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.353 [2024-07-12 16:02:59.519185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.353 [2024-07-12 16:02:59.519594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.353 [2024-07-12 16:02:59.519619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.353 [2024-07-12 16:02:59.519634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.353 [2024-07-12 16:02:59.519863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.353 [2024-07-12 16:02:59.520066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.353 [2024-07-12 16:02:59.520086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.353 [2024-07-12 16:02:59.520100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.353 [2024-07-12 16:02:59.522958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.353 [2024-07-12 16:02:59.532155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.353 [2024-07-12 16:02:59.532533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.353 [2024-07-12 16:02:59.532558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.353 [2024-07-12 16:02:59.532572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.353 [2024-07-12 16:02:59.532782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.353 [2024-07-12 16:02:59.532976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.353 [2024-07-12 16:02:59.532994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.353 [2024-07-12 16:02:59.533007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.353 [2024-07-12 16:02:59.535866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.353 [2024-07-12 16:02:59.545288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.353 [2024-07-12 16:02:59.545683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.353 [2024-07-12 16:02:59.545708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.353 [2024-07-12 16:02:59.545721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.353 [2024-07-12 16:02:59.545938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.353 [2024-07-12 16:02:59.546164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.353 [2024-07-12 16:02:59.546184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.353 [2024-07-12 16:02:59.546198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.353 [2024-07-12 16:02:59.549060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.353 [2024-07-12 16:02:59.558424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.353 [2024-07-12 16:02:59.558848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.353 [2024-07-12 16:02:59.558873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.353 [2024-07-12 16:02:59.558895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.353 [2024-07-12 16:02:59.559078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.353 [2024-07-12 16:02:59.559265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.353 [2024-07-12 16:02:59.559284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.353 [2024-07-12 16:02:59.559297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.353 [2024-07-12 16:02:59.562190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.353 [2024-07-12 16:02:59.571525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.353 [2024-07-12 16:02:59.571902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.353 [2024-07-12 16:02:59.571927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.353 [2024-07-12 16:02:59.571941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.353 [2024-07-12 16:02:59.572144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.353 [2024-07-12 16:02:59.572331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.353 [2024-07-12 16:02:59.572351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.353 [2024-07-12 16:02:59.572364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.353 [2024-07-12 16:02:59.575307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.353 [2024-07-12 16:02:59.584587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.353 [2024-07-12 16:02:59.585035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.353 [2024-07-12 16:02:59.585063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.353 [2024-07-12 16:02:59.585094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.353 [2024-07-12 16:02:59.585304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.353 [2024-07-12 16:02:59.585536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.353 [2024-07-12 16:02:59.585559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.353 [2024-07-12 16:02:59.585574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.354 [2024-07-12 16:02:59.589069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.354 [2024-07-12 16:02:59.597829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.354 [2024-07-12 16:02:59.598260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.354 [2024-07-12 16:02:59.598286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.354 [2024-07-12 16:02:59.598301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.354 [2024-07-12 16:02:59.598486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.354 [2024-07-12 16:02:59.598672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.354 [2024-07-12 16:02:59.598692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.354 [2024-07-12 16:02:59.598704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.354 [2024-07-12 16:02:59.601681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.354 [2024-07-12 16:02:59.610946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.354 [2024-07-12 16:02:59.611340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.354 [2024-07-12 16:02:59.611364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.354 [2024-07-12 16:02:59.611383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.354 [2024-07-12 16:02:59.611567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.354 [2024-07-12 16:02:59.611779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.354 [2024-07-12 16:02:59.611800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.354 [2024-07-12 16:02:59.611812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.354 [2024-07-12 16:02:59.614652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.354 [2024-07-12 16:02:59.624203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.354 [2024-07-12 16:02:59.624578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.354 [2024-07-12 16:02:59.624603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.354 [2024-07-12 16:02:59.624617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.354 [2024-07-12 16:02:59.624844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.354 [2024-07-12 16:02:59.625043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.354 [2024-07-12 16:02:59.625065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.354 [2024-07-12 16:02:59.625079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.354 [2024-07-12 16:02:59.627935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.354 [2024-07-12 16:02:59.637282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.354 [2024-07-12 16:02:59.637677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.354 [2024-07-12 16:02:59.637718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.354 [2024-07-12 16:02:59.637732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.354 [2024-07-12 16:02:59.637951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.354 [2024-07-12 16:02:59.638155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.354 [2024-07-12 16:02:59.638176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.354 [2024-07-12 16:02:59.638188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.354 [2024-07-12 16:02:59.641189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.613 [2024-07-12 16:02:59.650827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.613 [2024-07-12 16:02:59.651215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.613 [2024-07-12 16:02:59.651243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.613 [2024-07-12 16:02:59.651259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.613 [2024-07-12 16:02:59.651444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.613 [2024-07-12 16:02:59.651631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.613 [2024-07-12 16:02:59.651656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.613 [2024-07-12 16:02:59.651670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.613 [2024-07-12 16:02:59.654635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.613 [2024-07-12 16:02:59.663857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.613 [2024-07-12 16:02:59.664281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.613 [2024-07-12 16:02:59.664307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.613 [2024-07-12 16:02:59.664322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.613 [2024-07-12 16:02:59.664506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.613 [2024-07-12 16:02:59.664701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.613 [2024-07-12 16:02:59.664721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.613 [2024-07-12 16:02:59.664734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.613 [2024-07-12 16:02:59.667642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.613 [2024-07-12 16:02:59.676819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.613 [2024-07-12 16:02:59.677215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.613 [2024-07-12 16:02:59.677241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.613 [2024-07-12 16:02:59.677256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.613 [2024-07-12 16:02:59.677440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.613 [2024-07-12 16:02:59.677626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.613 [2024-07-12 16:02:59.677647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.613 [2024-07-12 16:02:59.677659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.613 [2024-07-12 16:02:59.680548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.613 [2024-07-12 16:02:59.689941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.613 [2024-07-12 16:02:59.690320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.613 [2024-07-12 16:02:59.690345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.613 [2024-07-12 16:02:59.690360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.613 [2024-07-12 16:02:59.690543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.613 [2024-07-12 16:02:59.690731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.613 [2024-07-12 16:02:59.690779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.613 [2024-07-12 16:02:59.690793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.613 [2024-07-12 16:02:59.693659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.613 [2024-07-12 16:02:59.703033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.613 [2024-07-12 16:02:59.703425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.613 [2024-07-12 16:02:59.703451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.613 [2024-07-12 16:02:59.703466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.613 [2024-07-12 16:02:59.703651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.613 [2024-07-12 16:02:59.703884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.613 [2024-07-12 16:02:59.703906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.613 [2024-07-12 16:02:59.703920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.613 [2024-07-12 16:02:59.706787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.613 [2024-07-12 16:02:59.716054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.613 [2024-07-12 16:02:59.716456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.613 [2024-07-12 16:02:59.716482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.613 [2024-07-12 16:02:59.716496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.613 [2024-07-12 16:02:59.716681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.613 [2024-07-12 16:02:59.716918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.613 [2024-07-12 16:02:59.716940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.613 [2024-07-12 16:02:59.716954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.613 [2024-07-12 16:02:59.719835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.613 [2024-07-12 16:02:59.729141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.613 [2024-07-12 16:02:59.729479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.613 [2024-07-12 16:02:59.729505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.613 [2024-07-12 16:02:59.729520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.613 [2024-07-12 16:02:59.729703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.613 [2024-07-12 16:02:59.729920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.613 [2024-07-12 16:02:59.729942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.613 [2024-07-12 16:02:59.729955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.613 [2024-07-12 16:02:59.732814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.613 [2024-07-12 16:02:59.742237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.613 [2024-07-12 16:02:59.742655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.613 [2024-07-12 16:02:59.742704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.613 [2024-07-12 16:02:59.742718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.613 [2024-07-12 16:02:59.742952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.613 [2024-07-12 16:02:59.743176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.613 [2024-07-12 16:02:59.743195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.613 [2024-07-12 16:02:59.743207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.613 [2024-07-12 16:02:59.746069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.613 [2024-07-12 16:02:59.755274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.613 [2024-07-12 16:02:59.755675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.613 [2024-07-12 16:02:59.755699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.613 [2024-07-12 16:02:59.755713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.613 [2024-07-12 16:02:59.755926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.613 [2024-07-12 16:02:59.756131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.613 [2024-07-12 16:02:59.756150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.613 [2024-07-12 16:02:59.756163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.613 [2024-07-12 16:02:59.759061] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.613 [2024-07-12 16:02:59.768274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.613 [2024-07-12 16:02:59.768695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.613 [2024-07-12 16:02:59.768719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.613 [2024-07-12 16:02:59.768733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.613 [2024-07-12 16:02:59.768946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.613 [2024-07-12 16:02:59.769151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.613 [2024-07-12 16:02:59.769169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.613 [2024-07-12 16:02:59.769182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.613 [2024-07-12 16:02:59.771926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.613 [2024-07-12 16:02:59.781345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.613 [2024-07-12 16:02:59.781783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.613 [2024-07-12 16:02:59.781809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.613 [2024-07-12 16:02:59.781823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.613 [2024-07-12 16:02:59.782006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.614 [2024-07-12 16:02:59.782193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.614 [2024-07-12 16:02:59.782212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.614 [2024-07-12 16:02:59.782228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.614 [2024-07-12 16:02:59.785142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.614 [2024-07-12 16:02:59.794403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.614 [2024-07-12 16:02:59.794808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.614 [2024-07-12 16:02:59.794834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.614 [2024-07-12 16:02:59.794848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.614 [2024-07-12 16:02:59.795031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.614 [2024-07-12 16:02:59.795218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.614 [2024-07-12 16:02:59.795236] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.614 [2024-07-12 16:02:59.795249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.614 [2024-07-12 16:02:59.798160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.614 [2024-07-12 16:02:59.807604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.614 [2024-07-12 16:02:59.808016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.614 [2024-07-12 16:02:59.808042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.614 [2024-07-12 16:02:59.808071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.614 [2024-07-12 16:02:59.808256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.614 [2024-07-12 16:02:59.808461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.614 [2024-07-12 16:02:59.808482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.614 [2024-07-12 16:02:59.808495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.614 [2024-07-12 16:02:59.811404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.614 [2024-07-12 16:02:59.820689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.614 [2024-07-12 16:02:59.821115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.614 [2024-07-12 16:02:59.821140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.614 [2024-07-12 16:02:59.821154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.614 [2024-07-12 16:02:59.821338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.614 [2024-07-12 16:02:59.821524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.614 [2024-07-12 16:02:59.821543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.614 [2024-07-12 16:02:59.821555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.614 [2024-07-12 16:02:59.824495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.614 [2024-07-12 16:02:59.833859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.614 [2024-07-12 16:02:59.834270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.614 [2024-07-12 16:02:59.834295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.614 [2024-07-12 16:02:59.834310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.614 [2024-07-12 16:02:59.834513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.614 [2024-07-12 16:02:59.834700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.614 [2024-07-12 16:02:59.834719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.614 [2024-07-12 16:02:59.834731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.614 [2024-07-12 16:02:59.838279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.614 [2024-07-12 16:02:59.847054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.614 [2024-07-12 16:02:59.847472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.614 [2024-07-12 16:02:59.847497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.614 [2024-07-12 16:02:59.847510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.614 [2024-07-12 16:02:59.847694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.614 [2024-07-12 16:02:59.847936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.614 [2024-07-12 16:02:59.847957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.614 [2024-07-12 16:02:59.847971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.614 [2024-07-12 16:02:59.850943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.614 [2024-07-12 16:02:59.860321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.614 [2024-07-12 16:02:59.860680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.614 [2024-07-12 16:02:59.860706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.614 [2024-07-12 16:02:59.860735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.614 [2024-07-12 16:02:59.860956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.614 [2024-07-12 16:02:59.861177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.614 [2024-07-12 16:02:59.861198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.614 [2024-07-12 16:02:59.861211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.614 [2024-07-12 16:02:59.863996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.614 [2024-07-12 16:02:59.873671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.614 [2024-07-12 16:02:59.874160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.614 [2024-07-12 16:02:59.874185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.614 [2024-07-12 16:02:59.874199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.614 [2024-07-12 16:02:59.874388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.614 [2024-07-12 16:02:59.874604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.614 [2024-07-12 16:02:59.874626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.614 [2024-07-12 16:02:59.874639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.614 [2024-07-12 16:02:59.877778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.614 [2024-07-12 16:02:59.887142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.614 [2024-07-12 16:02:59.887527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.614 [2024-07-12 16:02:59.887581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.614 [2024-07-12 16:02:59.887597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.614 [2024-07-12 16:02:59.887837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.614 [2024-07-12 16:02:59.888063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.614 [2024-07-12 16:02:59.888097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.614 [2024-07-12 16:02:59.888110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.614 [2024-07-12 16:02:59.891254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.614 [2024-07-12 16:02:59.900445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.614 [2024-07-12 16:02:59.900824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.614 [2024-07-12 16:02:59.900851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.614 [2024-07-12 16:02:59.900866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.614 [2024-07-12 16:02:59.901079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.614 [2024-07-12 16:02:59.901317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.614 [2024-07-12 16:02:59.901339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.614 [2024-07-12 16:02:59.901361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.614 [2024-07-12 16:02:59.904867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.874 [2024-07-12 16:02:59.913774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.874 [2024-07-12 16:02:59.914219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-07-12 16:02:59.914271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.874 [2024-07-12 16:02:59.914286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.874 [2024-07-12 16:02:59.914488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.874 [2024-07-12 16:02:59.914681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.874 [2024-07-12 16:02:59.914700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.874 [2024-07-12 16:02:59.914713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.874 [2024-07-12 16:02:59.917754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.874 [2024-07-12 16:02:59.927081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.874 [2024-07-12 16:02:59.927425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-07-12 16:02:59.927450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.874 [2024-07-12 16:02:59.927464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.874 [2024-07-12 16:02:59.927652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.874 [2024-07-12 16:02:59.927881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.874 [2024-07-12 16:02:59.927902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.874 [2024-07-12 16:02:59.927916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.874 [2024-07-12 16:02:59.930949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.874 [2024-07-12 16:02:59.940306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.874 [2024-07-12 16:02:59.940678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-07-12 16:02:59.940716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.874 [2024-07-12 16:02:59.940730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.874 [2024-07-12 16:02:59.940958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.874 [2024-07-12 16:02:59.941190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.874 [2024-07-12 16:02:59.941209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.874 [2024-07-12 16:02:59.941221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.874 [2024-07-12 16:02:59.944145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.874 [2024-07-12 16:02:59.953515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.874 [2024-07-12 16:02:59.953885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-07-12 16:02:59.953925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.874 [2024-07-12 16:02:59.953939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.874 [2024-07-12 16:02:59.954160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.874 [2024-07-12 16:02:59.954352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.874 [2024-07-12 16:02:59.954370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.874 [2024-07-12 16:02:59.954383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.874 [2024-07-12 16:02:59.957280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.874 [2024-07-12 16:02:59.966594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.874 [2024-07-12 16:02:59.966946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-07-12 16:02:59.966975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.874 [2024-07-12 16:02:59.967004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.874 [2024-07-12 16:02:59.967206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.874 [2024-07-12 16:02:59.967398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.874 [2024-07-12 16:02:59.967416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.874 [2024-07-12 16:02:59.967428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.874 [2024-07-12 16:02:59.970326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.874 [2024-07-12 16:02:59.979771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.874 [2024-07-12 16:02:59.980117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-07-12 16:02:59.980141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.874 [2024-07-12 16:02:59.980155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.874 [2024-07-12 16:02:59.980343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.874 [2024-07-12 16:02:59.980534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.874 [2024-07-12 16:02:59.980553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.874 [2024-07-12 16:02:59.980565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.874 [2024-07-12 16:02:59.983373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.874 [2024-07-12 16:02:59.992830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.874 [2024-07-12 16:02:59.993175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-07-12 16:02:59.993199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.874 [2024-07-12 16:02:59.993213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.874 [2024-07-12 16:02:59.993402] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.874 [2024-07-12 16:02:59.993593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.874 [2024-07-12 16:02:59.993611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.874 [2024-07-12 16:02:59.993623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.874 [2024-07-12 16:02:59.996432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.874 [2024-07-12 16:03:00.006331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.874 [2024-07-12 16:03:00.006748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-07-12 16:03:00.006786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.874 [2024-07-12 16:03:00.006808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.874 [2024-07-12 16:03:00.007081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.874 [2024-07-12 16:03:00.007367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.874 [2024-07-12 16:03:00.007396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.874 [2024-07-12 16:03:00.007415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.874 [2024-07-12 16:03:00.010923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.874 [2024-07-12 16:03:00.019636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.874 [2024-07-12 16:03:00.020026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.874 [2024-07-12 16:03:00.020068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.874 [2024-07-12 16:03:00.020083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.874 [2024-07-12 16:03:00.020272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.874 [2024-07-12 16:03:00.020464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.874 [2024-07-12 16:03:00.020484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.875 [2024-07-12 16:03:00.020496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.875 [2024-07-12 16:03:00.023571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.875 [2024-07-12 16:03:00.033334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.875 [2024-07-12 16:03:00.033699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-07-12 16:03:00.033745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.875 [2024-07-12 16:03:00.033761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.875 [2024-07-12 16:03:00.033987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.875 [2024-07-12 16:03:00.034209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.875 [2024-07-12 16:03:00.034229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.875 [2024-07-12 16:03:00.034241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.875 [2024-07-12 16:03:00.037349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 859563 Killed "${NVMF_APP[@]}" "$@" 00:26:02.875 16:03:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:26:02.875 16:03:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:02.875 16:03:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:02.875 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:02.875 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:02.875 [2024-07-12 16:03:00.046782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.875 [2024-07-12 16:03:00.047166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-07-12 16:03:00.047206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.875 [2024-07-12 16:03:00.047221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.875 [2024-07-12 16:03:00.047435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.875 [2024-07-12 16:03:00.047665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.875 [2024-07-12 16:03:00.047686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.875 [2024-07-12 16:03:00.047699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.875 16:03:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=860538 00:26:02.875 16:03:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 860538 00:26:02.875 16:03:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:02.875 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 860538 ']' 00:26:02.875 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.875 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:02.875 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.875 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:02.875 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:02.875 [2024-07-12 16:03:00.050906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
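The xtrace lines above show how the test restarts the target once the old NVMF_APP instance is killed: bdevperf.sh calls tgt_init, which goes through nvmfappstart -m 0xE and launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, then waits for it with waitforlisten. A minimal manual equivalent is sketched below; it assumes a local SPDK build tree, root privileges for ip netns, and uses a simple RPC polling loop as a stand-in for the waitforlisten helper rather than its actual implementation:

  # restart the NVMe-oF target with the flags recorded in the log (-i 0 -e 0xFFFF -m 0xE)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!

  # wait until the RPC socket answers; rough approximation of waitforlisten
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is up and listening on /var/tmp/spdk.sock"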
00:26:02.875 [2024-07-12 16:03:00.060211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.875 [2024-07-12 16:03:00.060592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-07-12 16:03:00.060633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.875 [2024-07-12 16:03:00.060649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.875 [2024-07-12 16:03:00.060906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.875 [2024-07-12 16:03:00.061152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.875 [2024-07-12 16:03:00.061172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.875 [2024-07-12 16:03:00.061186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.875 [2024-07-12 16:03:00.064359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.875 [2024-07-12 16:03:00.073680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.875 [2024-07-12 16:03:00.074066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-07-12 16:03:00.074108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.875 [2024-07-12 16:03:00.074123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.875 [2024-07-12 16:03:00.074317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.875 [2024-07-12 16:03:00.074514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.875 [2024-07-12 16:03:00.074533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.875 [2024-07-12 16:03:00.074546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.875 [2024-07-12 16:03:00.077688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.875 [2024-07-12 16:03:00.087138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.875 [2024-07-12 16:03:00.087558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-07-12 16:03:00.087600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.875 [2024-07-12 16:03:00.087615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.875 [2024-07-12 16:03:00.087861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.875 [2024-07-12 16:03:00.088091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.875 [2024-07-12 16:03:00.088112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.875 [2024-07-12 16:03:00.088126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.875 [2024-07-12 16:03:00.091578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.875 [2024-07-12 16:03:00.095562] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:26:02.875 [2024-07-12 16:03:00.095633] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:02.875 [2024-07-12 16:03:00.100599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.875 [2024-07-12 16:03:00.100985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-07-12 16:03:00.101015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.875 [2024-07-12 16:03:00.101031] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.875 [2024-07-12 16:03:00.101258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.875 [2024-07-12 16:03:00.101466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.875 [2024-07-12 16:03:00.101495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.875 [2024-07-12 16:03:00.101508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.875 [2024-07-12 16:03:00.104636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.875 [2024-07-12 16:03:00.114137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.875 [2024-07-12 16:03:00.114581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-07-12 16:03:00.114607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.875 [2024-07-12 16:03:00.114638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.875 [2024-07-12 16:03:00.114871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.875 [2024-07-12 16:03:00.115109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.875 [2024-07-12 16:03:00.115129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.875 [2024-07-12 16:03:00.115142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.875 [2024-07-12 16:03:00.118477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.875 [2024-07-12 16:03:00.127736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.875 [2024-07-12 16:03:00.128182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-07-12 16:03:00.128208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.875 [2024-07-12 16:03:00.128237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.875 [2024-07-12 16:03:00.128438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.875 [2024-07-12 16:03:00.128672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.875 [2024-07-12 16:03:00.128693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.875 [2024-07-12 16:03:00.128706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.875 [2024-07-12 16:03:00.132007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:02.875 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.875 [2024-07-12 16:03:00.141367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.875 [2024-07-12 16:03:00.141710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.875 [2024-07-12 16:03:00.141760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.875 [2024-07-12 16:03:00.141794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.875 [2024-07-12 16:03:00.142011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.875 [2024-07-12 16:03:00.142233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.875 [2024-07-12 16:03:00.142254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.876 [2024-07-12 16:03:00.142267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.876 [2024-07-12 16:03:00.145441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.876 [2024-07-12 16:03:00.154904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.876 [2024-07-12 16:03:00.155298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.876 [2024-07-12 16:03:00.155323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:02.876 [2024-07-12 16:03:00.155353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:02.876 [2024-07-12 16:03:00.155553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:02.876 [2024-07-12 16:03:00.155792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.876 [2024-07-12 16:03:00.155815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.876 [2024-07-12 16:03:00.155828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.876 [2024-07-12 16:03:00.159000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.135 [2024-07-12 16:03:00.168537] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.135 [2024-07-12 16:03:00.168888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.135 [2024-07-12 16:03:00.168917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.135 [2024-07-12 16:03:00.168934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.135 [2024-07-12 16:03:00.169158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.135 [2024-07-12 16:03:00.169363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.135 [2024-07-12 16:03:00.169382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.135 [2024-07-12 16:03:00.169395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.135 [2024-07-12 16:03:00.170155] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:03.135 [2024-07-12 16:03:00.172874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.135 [2024-07-12 16:03:00.182123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.135 [2024-07-12 16:03:00.182599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.135 [2024-07-12 16:03:00.182636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.135 [2024-07-12 16:03:00.182654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.135 [2024-07-12 16:03:00.182894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.135 [2024-07-12 16:03:00.183133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.135 [2024-07-12 16:03:00.183153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.135 [2024-07-12 16:03:00.183176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.135 [2024-07-12 16:03:00.186412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
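For context on the core-related notices in this part of the log: the target was started with -m 0xE, and 0xE is binary 1110, i.e. cores 1, 2 and 3, which matches the "Total cores available: 3" notice above and the reactors reported on cores 1-3 further down. A purely illustrative shell sketch of decoding such a mask:

  # decode an SPDK core mask (0xE here) into the cores the reactors are expected to use
  mask=0xE
  for core in $(seq 0 31); do
      (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
  done
  # prints cores 1, 2 and 3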
00:26:03.135 [2024-07-12 16:03:00.195609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.135 [2024-07-12 16:03:00.195977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.135 [2024-07-12 16:03:00.196009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.135 [2024-07-12 16:03:00.196040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.135 [2024-07-12 16:03:00.196264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.135 [2024-07-12 16:03:00.196469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.135 [2024-07-12 16:03:00.196489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.135 [2024-07-12 16:03:00.196503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.135 [2024-07-12 16:03:00.199667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.135 [2024-07-12 16:03:00.209215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.135 [2024-07-12 16:03:00.209614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.135 [2024-07-12 16:03:00.209642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.135 [2024-07-12 16:03:00.209657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.135 [2024-07-12 16:03:00.209894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.135 [2024-07-12 16:03:00.210140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.135 [2024-07-12 16:03:00.210171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.135 [2024-07-12 16:03:00.210212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.135 [2024-07-12 16:03:00.213466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.135 [2024-07-12 16:03:00.222995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.135 [2024-07-12 16:03:00.223444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.135 [2024-07-12 16:03:00.223485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.135 [2024-07-12 16:03:00.223501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.135 [2024-07-12 16:03:00.223732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.135 [2024-07-12 16:03:00.223961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.135 [2024-07-12 16:03:00.223982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.135 [2024-07-12 16:03:00.223997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.135 [2024-07-12 16:03:00.227183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.135 [2024-07-12 16:03:00.236470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.135 [2024-07-12 16:03:00.236920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.135 [2024-07-12 16:03:00.236956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.135 [2024-07-12 16:03:00.236975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.135 [2024-07-12 16:03:00.237217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.135 [2024-07-12 16:03:00.237435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.135 [2024-07-12 16:03:00.237455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.135 [2024-07-12 16:03:00.237471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.135 [2024-07-12 16:03:00.240654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.135 [2024-07-12 16:03:00.249994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.135 [2024-07-12 16:03:00.250494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.135 [2024-07-12 16:03:00.250537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.135 [2024-07-12 16:03:00.250554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.135 [2024-07-12 16:03:00.250788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.135 [2024-07-12 16:03:00.251007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.135 [2024-07-12 16:03:00.251043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.135 [2024-07-12 16:03:00.251057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.135 [2024-07-12 16:03:00.254198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.135 [2024-07-12 16:03:00.263416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.135 [2024-07-12 16:03:00.263792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.135 [2024-07-12 16:03:00.263835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.135 [2024-07-12 16:03:00.263851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.135 [2024-07-12 16:03:00.264086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.135 [2024-07-12 16:03:00.264290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.135 [2024-07-12 16:03:00.264310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.135 [2024-07-12 16:03:00.264323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.135 [2024-07-12 16:03:00.267426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.135 [2024-07-12 16:03:00.276898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.135 [2024-07-12 16:03:00.277304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.135 [2024-07-12 16:03:00.277343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.135 [2024-07-12 16:03:00.277358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.135 [2024-07-12 16:03:00.277573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.135 [2024-07-12 16:03:00.277805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.135 [2024-07-12 16:03:00.277826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.135 [2024-07-12 16:03:00.277840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.135 [2024-07-12 16:03:00.280961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.135 [2024-07-12 16:03:00.285939] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.135 [2024-07-12 16:03:00.285974] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.135 [2024-07-12 16:03:00.286004] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.135 [2024-07-12 16:03:00.286016] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.135 [2024-07-12 16:03:00.286026] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:03.135 [2024-07-12 16:03:00.286179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:03.136 [2024-07-12 16:03:00.286243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:03.136 [2024-07-12 16:03:00.286247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.136 [2024-07-12 16:03:00.290559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.136 [2024-07-12 16:03:00.290986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.136 [2024-07-12 16:03:00.291032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.136 [2024-07-12 16:03:00.291050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.136 [2024-07-12 16:03:00.291282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.136 [2024-07-12 16:03:00.291501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.136 [2024-07-12 16:03:00.291529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.136 [2024-07-12 16:03:00.291545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:03.136 [2024-07-12 16:03:00.294826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.136 [2024-07-12 16:03:00.304205] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.136 [2024-07-12 16:03:00.304703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.136 [2024-07-12 16:03:00.304759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.136 [2024-07-12 16:03:00.304779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.136 [2024-07-12 16:03:00.305027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.136 [2024-07-12 16:03:00.305263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.136 [2024-07-12 16:03:00.305284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.136 [2024-07-12 16:03:00.305300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.136 [2024-07-12 16:03:00.308512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.136 [2024-07-12 16:03:00.317769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.136 [2024-07-12 16:03:00.318303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.136 [2024-07-12 16:03:00.318340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.136 [2024-07-12 16:03:00.318373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.136 [2024-07-12 16:03:00.318588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.136 [2024-07-12 16:03:00.318812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.136 [2024-07-12 16:03:00.318834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.136 [2024-07-12 16:03:00.318850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.136 [2024-07-12 16:03:00.322010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.136 [2024-07-12 16:03:00.331415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.136 [2024-07-12 16:03:00.331990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.136 [2024-07-12 16:03:00.332038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.136 [2024-07-12 16:03:00.332057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.136 [2024-07-12 16:03:00.332286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.136 [2024-07-12 16:03:00.332500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.136 [2024-07-12 16:03:00.332522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.136 [2024-07-12 16:03:00.332538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.136 [2024-07-12 16:03:00.335774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.136 [2024-07-12 16:03:00.344965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.136 [2024-07-12 16:03:00.345514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.136 [2024-07-12 16:03:00.345549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.136 [2024-07-12 16:03:00.345568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.136 [2024-07-12 16:03:00.345807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.136 [2024-07-12 16:03:00.346028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.136 [2024-07-12 16:03:00.346053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.136 [2024-07-12 16:03:00.346068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.136 [2024-07-12 16:03:00.349272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.136 [2024-07-12 16:03:00.358586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.136 [2024-07-12 16:03:00.359098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.136 [2024-07-12 16:03:00.359134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.136 [2024-07-12 16:03:00.359153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.136 [2024-07-12 16:03:00.359390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.136 [2024-07-12 16:03:00.359627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.136 [2024-07-12 16:03:00.359648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.136 [2024-07-12 16:03:00.359664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.136 [2024-07-12 16:03:00.362950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.136 [2024-07-12 16:03:00.372184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.136 [2024-07-12 16:03:00.372519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.136 [2024-07-12 16:03:00.372555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.136 [2024-07-12 16:03:00.372571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.136 [2024-07-12 16:03:00.372789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.136 [2024-07-12 16:03:00.372999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.136 [2024-07-12 16:03:00.373020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.136 [2024-07-12 16:03:00.373041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.136 [2024-07-12 16:03:00.376229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
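The cycle repeating above — connect() failed, errno = 111, then "controller reinitialization failed" and "Resetting controller failed." — is bdev_nvme retrying nqn.2016-06.io.spdk:cnode1 while nothing is yet accepting TCP connections on 10.0.0.2:4420; errno 111 is ECONNREFUSED on Linux. A minimal sketch (not part of the test script; assumes python3 and iproute2's ss are installed on the initiator host) for confirming both points by hand:
# Hedged sketch: decode errno 111 and check for a listener on port 4420.
python3 -c 'import errno; print(errno.errorcode[111])'          # prints ECONNREFUSED
ss -ltn | grep -w 4420 || echo "no TCP listener on port 4420"   # stays empty until the target adds its listener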
00:26:03.136 [2024-07-12 16:03:00.385850] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.136 [2024-07-12 16:03:00.386244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.136 [2024-07-12 16:03:00.386271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.136 [2024-07-12 16:03:00.386301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.136 [2024-07-12 16:03:00.386523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.136 [2024-07-12 16:03:00.386758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.136 [2024-07-12 16:03:00.386780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.136 [2024-07-12 16:03:00.386794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.136 [2024-07-12 16:03:00.390045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.136 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:03.136 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:03.136 16:03:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:03.136 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:03.136 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:03.136 [2024-07-12 16:03:00.399473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.136 [2024-07-12 16:03:00.399883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.136 [2024-07-12 16:03:00.399911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.136 [2024-07-12 16:03:00.399927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.136 [2024-07-12 16:03:00.400153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.136 [2024-07-12 16:03:00.400365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.136 [2024-07-12 16:03:00.400386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.136 [2024-07-12 16:03:00.400399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.136 [2024-07-12 16:03:00.403612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.136 [2024-07-12 16:03:00.413042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.136 [2024-07-12 16:03:00.413401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.136 [2024-07-12 16:03:00.413428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.136 [2024-07-12 16:03:00.413443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.136 [2024-07-12 16:03:00.413650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.136 [2024-07-12 16:03:00.413891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.136 [2024-07-12 16:03:00.413913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.136 [2024-07-12 16:03:00.413926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.136 16:03:00 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.136 16:03:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:03.136 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.137 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:03.137 [2024-07-12 16:03:00.417140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.137 [2024-07-12 16:03:00.418517] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.137 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.137 16:03:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:03.137 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.137 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:03.137 [2024-07-12 16:03:00.426702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.137 [2024-07-12 16:03:00.427058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.137 [2024-07-12 16:03:00.427087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.137 [2024-07-12 16:03:00.427104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.395 [2024-07-12 16:03:00.427318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.395 [2024-07-12 16:03:00.427536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.395 [2024-07-12 16:03:00.427557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.395 [2024-07-12 16:03:00.427570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:03.395 [2024-07-12 16:03:00.430812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.395 [2024-07-12 16:03:00.440319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.395 [2024-07-12 16:03:00.440689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.395 [2024-07-12 16:03:00.440715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.395 [2024-07-12 16:03:00.440754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.395 [2024-07-12 16:03:00.440969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.395 [2024-07-12 16:03:00.441192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.395 [2024-07-12 16:03:00.441212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.395 [2024-07-12 16:03:00.441225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.395 [2024-07-12 16:03:00.444396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.395 [2024-07-12 16:03:00.453843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.395 [2024-07-12 16:03:00.454334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.395 [2024-07-12 16:03:00.454368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.395 [2024-07-12 16:03:00.454400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.395 [2024-07-12 16:03:00.454612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.395 [2024-07-12 16:03:00.454843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.395 [2024-07-12 16:03:00.454864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.395 [2024-07-12 16:03:00.454880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.395 [2024-07-12 16:03:00.458079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:03.395 Malloc0 00:26:03.395 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.395 16:03:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:03.395 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.395 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:03.395 [2024-07-12 16:03:00.467428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.395 [2024-07-12 16:03:00.467851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.395 [2024-07-12 16:03:00.467883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.395 [2024-07-12 16:03:00.467902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.395 [2024-07-12 16:03:00.468121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.395 [2024-07-12 16:03:00.468342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.395 [2024-07-12 16:03:00.468363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.395 [2024-07-12 16:03:00.468380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:03.395 [2024-07-12 16:03:00.471610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.395 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.395 16:03:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:03.395 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.395 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:03.395 [2024-07-12 16:03:00.480899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.395 [2024-07-12 16:03:00.481304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.395 [2024-07-12 16:03:00.481330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b080 with addr=10.0.0.2, port=4420 00:26:03.395 [2024-07-12 16:03:00.481360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b080 is same with the state(5) to be set 00:26:03.395 [2024-07-12 16:03:00.481567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b080 (9): Bad file descriptor 00:26:03.395 [2024-07-12 16:03:00.481808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:03.395 [2024-07-12 16:03:00.481830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:03.395 [2024-07-12 16:03:00.481844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:03.395 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.395 16:03:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:03.395 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.395 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:03.395 [2024-07-12 16:03:00.485105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.395 [2024-07-12 16:03:00.486077] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:03.395 16:03:00 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.395 16:03:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 859850 00:26:03.395 [2024-07-12 16:03:00.494388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:03.395 [2024-07-12 16:03:00.523172] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:13.356 00:26:13.356 Latency(us) 00:26:13.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.356 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:13.356 Verification LBA range: start 0x0 length 0x4000 00:26:13.356 Nvme1n1 : 15.00 6836.52 26.71 10177.50 0.00 7501.05 831.34 18738.44 00:26:13.356 =================================================================================================================== 00:26:13.356 Total : 6836.52 26.71 10177.50 0.00 7501.05 831.34 18738.44 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:13.356 rmmod nvme_tcp 00:26:13.356 rmmod nvme_fabrics 00:26:13.356 rmmod nvme_keyring 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 860538 ']' 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 860538 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 860538 ']' 
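The rpc_cmd calls interleaved with the reset/bdevperf output above build the target in a fixed order: TCP transport, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, then the 10.0.0.2:4420 listener. As a hedged sketch, the same sequence issued directly with scripts/rpc.py (rpc_cmd is the test wrapper around it; the default /var/tmp/spdk.sock RPC socket is assumed) would look roughly like:
# Hedged sketch: direct rpc.py equivalents of the rpc_cmd sequence seen in this log.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420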
00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 860538 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 860538 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 860538' 00:26:13.356 killing process with pid 860538 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 860538 00:26:13.356 16:03:09 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 860538 00:26:13.356 16:03:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:13.356 16:03:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:13.356 16:03:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:13.356 16:03:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.356 16:03:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:13.356 16:03:10 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.356 16:03:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.356 16:03:10 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.258 16:03:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:15.258 00:26:15.258 real 0m22.550s 00:26:15.258 user 0m59.838s 00:26:15.258 sys 0m4.611s 00:26:15.258 16:03:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:15.258 16:03:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:15.258 ************************************ 00:26:15.258 END TEST nvmf_bdevperf 00:26:15.258 ************************************ 00:26:15.258 16:03:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:15.258 16:03:12 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:15.258 16:03:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:15.258 16:03:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:15.258 16:03:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:15.258 ************************************ 00:26:15.258 START TEST nvmf_target_disconnect 00:26:15.258 ************************************ 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:15.258 * Looking for test storage... 
00:26:15.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:26:15.258 16:03:12 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.157 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:17.158 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:17.158 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.158 16:03:14 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:17.158 Found net devices under 0000:84:00.0: cvl_0_0 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:17.158 Found net devices under 0000:84:00.1: cvl_0_1 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:17.158 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:17.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:26:17.417 00:26:17.417 --- 10.0.0.2 ping statistics --- 00:26:17.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.417 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:17.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:26:17.417 00:26:17.417 --- 10.0.0.1 ping statistics --- 00:26:17.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.417 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:17.417 ************************************ 00:26:17.417 START TEST nvmf_target_disconnect_tc1 00:26:17.417 ************************************ 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:26:17.417 
16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:17.417 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.417 [2024-07-12 16:03:14.619480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.417 [2024-07-12 16:03:14.619542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x603930 with addr=10.0.0.2, port=4420 00:26:17.417 [2024-07-12 16:03:14.619573] nvme_tcp.c:2712:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:17.417 [2024-07-12 16:03:14.619590] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:17.417 [2024-07-12 16:03:14.619601] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:17.417 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:17.417 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:17.417 Initializing NVMe Controllers 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:17.417 00:26:17.417 real 0m0.094s 00:26:17.417 user 0m0.043s 00:26:17.417 sys 0m0.050s 
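tc1 above deliberately runs the reconnect example while no target is listening and counts the non-zero exit as a pass (the NOT wrapper flips it, hence es=1). Stripped of the test framework, the check reduces to something like the following hedged sketch; the binary path and -r argument are copied from the log, while the surrounding if/echo scaffolding is illustrative only and assumes hugepages and root privileges as in the CI run:
# Hedged sketch: expect the reconnect example to fail while nothing listens on 10.0.0.2:4420.
if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
     -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
     -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
    echo "unexpected: connect succeeded, so a target is already listening" >&2
    exit 1
fi
echo "reconnect failed as expected (no listener yet)"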
00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:17.417 ************************************ 00:26:17.417 END TEST nvmf_target_disconnect_tc1 00:26:17.417 ************************************ 00:26:17.417 16:03:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:17.418 ************************************ 00:26:17.418 START TEST nvmf_target_disconnect_tc2 00:26:17.418 ************************************ 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=864321 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 864321 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 864321 ']' 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:17.418 16:03:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:17.676 [2024-07-12 16:03:14.733709] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:26:17.676 [2024-07-12 16:03:14.733812] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.676 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.676 [2024-07-12 16:03:14.797671] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:17.677 [2024-07-12 16:03:14.906989] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.677 [2024-07-12 16:03:14.907063] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.677 [2024-07-12 16:03:14.907077] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.677 [2024-07-12 16:03:14.907098] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.677 [2024-07-12 16:03:14.907107] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:17.677 [2024-07-12 16:03:14.907191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:17.677 [2024-07-12 16:03:14.907255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:17.677 [2024-07-12 16:03:14.907279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:17.677 [2024-07-12 16:03:14.907282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:17.935 Malloc0 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:17.935 16:03:15 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:17.935 [2024-07-12 16:03:15.092910] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:17.935 [2024-07-12 16:03:15.121160] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=864464 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:17.935 16:03:15 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:17.935 EAL: No free 2048 kB 
hugepages reported on node 1 00:26:20.482 16:03:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 864321 00:26:20.482 16:03:17 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 [2024-07-12 16:03:17.146198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting 
I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 [2024-07-12 16:03:17.146520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 
00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 [2024-07-12 16:03:17.146881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Read completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 00:26:20.482 Write completed with error (sct=0, sc=8) 00:26:20.482 starting I/O failed 
00:26:20.483 Write completed with error (sct=0, sc=8) 00:26:20.483 starting I/O failed 00:26:20.483 Write completed with error (sct=0, sc=8) 00:26:20.483 starting I/O failed 00:26:20.483 Write completed with error (sct=0, sc=8) 00:26:20.483 starting I/O failed 00:26:20.483 Write completed with error (sct=0, sc=8) 00:26:20.483 starting I/O failed 00:26:20.483 Write completed with error (sct=0, sc=8) 00:26:20.483 starting I/O failed 00:26:20.483 Read completed with error (sct=0, sc=8) 00:26:20.483 starting I/O failed 00:26:20.483 Write completed with error (sct=0, sc=8) 00:26:20.483 starting I/O failed 00:26:20.483 Write completed with error (sct=0, sc=8) 00:26:20.483 starting I/O failed 00:26:20.483 Write completed with error (sct=0, sc=8) 00:26:20.483 starting I/O failed 00:26:20.483 Write completed with error (sct=0, sc=8) 00:26:20.483 starting I/O failed 00:26:20.483 Read completed with error (sct=0, sc=8) 00:26:20.483 starting I/O failed 00:26:20.483 Read completed with error (sct=0, sc=8) 00:26:20.483 starting I/O failed 00:26:20.483 Read completed with error (sct=0, sc=8) 00:26:20.483 starting I/O failed 00:26:20.483 Read completed with error (sct=0, sc=8) 00:26:20.483 starting I/O failed 00:26:20.483 [2024-07-12 16:03:17.147220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:20.483 [2024-07-12 16:03:17.147381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.147413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.147591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.147644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.147819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.147845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.147950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.147977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.148119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.148143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.148322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.148345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 
00:26:20.483 [2024-07-12 16:03:17.148512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.148549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.148748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.148786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.148892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.148918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.149082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.149120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.149302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.149329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.149430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.149454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.149561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.149585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.149806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.149832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.149945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.149970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.150101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.150139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 
00:26:20.483 [2024-07-12 16:03:17.150289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.150323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.150476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.150498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.150675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.150698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.150830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.150856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.150961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.150987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.151133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.151170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.151279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.151302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.151484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.151508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.151675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.151699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.151859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.151886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 
00:26:20.483 [2024-07-12 16:03:17.152000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.152039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.152156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.152203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.152325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.152349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.152514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.152537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.152700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.152747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.152847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.152874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.152972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.152997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.153158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.153195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.153334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.483 [2024-07-12 16:03:17.153358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.483 qpair failed and we were unable to recover it. 00:26:20.483 [2024-07-12 16:03:17.153461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.153483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 
00:26:20.484 [2024-07-12 16:03:17.153662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.153687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.153829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.153855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.153989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.154015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.154203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.154226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.154422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.154448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.154600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.154623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.154790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.154816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.154908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.154934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.155113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.155136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.155308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.155361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 
00:26:20.484 [2024-07-12 16:03:17.155459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.155483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.155639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.155664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.155822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.155848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.155973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.156038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.156176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.156202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.156342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.156366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.156479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.156504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.156674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.156700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.156846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.156873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.156960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.156987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 
00:26:20.484 [2024-07-12 16:03:17.157181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.157204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.157382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.157405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.157528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.157577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.157762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.157813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.157916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.157942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.158134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.158194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.158343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.158396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.158533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.158558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.158731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.158765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.158868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.158893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 
00:26:20.484 [2024-07-12 16:03:17.159045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.159070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.159200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.159241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.159416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.159439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.159578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.159603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.159756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.159809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.159928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.159968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.160127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.160153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.160322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.160360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.160518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.160567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 00:26:20.484 [2024-07-12 16:03:17.160728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.484 [2024-07-12 16:03:17.160762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.484 qpair failed and we were unable to recover it. 
00:26:20.485 [2024-07-12 16:03:17.160900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.160938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.161081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.161121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.161279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.161303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.161491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.161546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.161690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.161731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.161862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.161888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.162058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.162096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.162248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.162272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.162413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.162437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.162579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.162603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 
00:26:20.485 [2024-07-12 16:03:17.162763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.162789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.162907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.162932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.163095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.163133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.163273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.163330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.163492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.163516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.163652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.163691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.163829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.163857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.163964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.163991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.164128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.164178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 00:26:20.485 [2024-07-12 16:03:17.164404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.485 [2024-07-12 16:03:17.164446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.485 qpair failed and we were unable to recover it. 
00:26:20.485 [2024-07-12 16:03:17.164610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.485 [2024-07-12 16:03:17.164635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420
00:26:20.485 qpair failed and we were unable to recover it.
00:26:20.485 [2024-07-12 16:03:17.164752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.485 [2024-07-12 16:03:17.164778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420
00:26:20.485 qpair failed and we were unable to recover it.
(The same three-line sequence -- "connect() failed, errno = 111" from posix.c:1023:posix_sock_create, "sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420" from nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock, followed by "qpair failed and we were unable to recover it." -- repeats continuously from [2024-07-12 16:03:17.164878] through [2024-07-12 16:03:17.201495] with no other output in between.)
00:26:20.491 [2024-07-12 16:03:17.201670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.201709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.201842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.201866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.201987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.202012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.202193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.202217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.202443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.202467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.202659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.202682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.202828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.202878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.203026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.203073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.203266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.203314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.203500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.203523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 
00:26:20.491 [2024-07-12 16:03:17.203696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.203752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.203875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.203918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.204037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.204076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.204239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.204288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.204418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.204468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.204644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.204666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.204854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.204906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.205022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.205071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.205216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.205270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.205452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.205475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 
00:26:20.491 [2024-07-12 16:03:17.205647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.205670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.205824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.205883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.206015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.206060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.206209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.206261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.206367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.206390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.206562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.491 [2024-07-12 16:03:17.206595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.491 qpair failed and we were unable to recover it. 00:26:20.491 [2024-07-12 16:03:17.206768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.206805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.206975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.207030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.207219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.207268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.207475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.207499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 
00:26:20.492 [2024-07-12 16:03:17.207634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.207656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.207849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.207875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.208059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.208118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.208278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.208330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.208434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.208458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.208627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.208658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.208902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.208953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.209134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.209185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.209339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.209387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.209581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.209605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 
00:26:20.492 [2024-07-12 16:03:17.209756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.209779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.209921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.209980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.210155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.210238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.210427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.210479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.210637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.210661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.210858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.210908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.211071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.211131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.211314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.211365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.211539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.211562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.211745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.211769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 
00:26:20.492 [2024-07-12 16:03:17.211923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.211976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.212149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.212198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.212318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.212382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.212571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.212594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.212784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.212821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.213021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.213071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.213261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.213312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.213499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.213547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.213694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.213717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.213876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.213930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 
00:26:20.492 [2024-07-12 16:03:17.214075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.214126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.214304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.214353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.214506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.214530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.214716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.214746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.214911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.214974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.215157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.215207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.215328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.215392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.492 qpair failed and we were unable to recover it. 00:26:20.492 [2024-07-12 16:03:17.215588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.492 [2024-07-12 16:03:17.215611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.215820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.215875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.216053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.216105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 
00:26:20.493 [2024-07-12 16:03:17.216231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.216289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.216436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.216473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.216628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.216665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.216819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.216859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.217000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.217055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.217236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.217289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.217414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.217446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.217617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.217667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.217790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.217857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.218032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.218090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 
00:26:20.493 [2024-07-12 16:03:17.218264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.218315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.218488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.218511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.218612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.218635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.218773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.218797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.218939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.218988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.219166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.219216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.219388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.219415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.219558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.219604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.219750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.219774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.219938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.219990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 
00:26:20.493 [2024-07-12 16:03:17.220194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.220240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.220364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.220388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.220523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.220546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.220679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.220702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.220958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.220996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.221158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.221183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.221362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.221387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.221507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.221545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.221676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.221701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.221834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.221859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 
00:26:20.493 [2024-07-12 16:03:17.222040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.222093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.222274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.222325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.222478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.222530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.222730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.222780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.222890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.222918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.223122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.223177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.223308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.223361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.223577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.223629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.493 [2024-07-12 16:03:17.223795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.493 [2024-07-12 16:03:17.223821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.493 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.223937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.223990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 
00:26:20.494 [2024-07-12 16:03:17.224139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.224189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.224329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.224352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.224501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.224525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.224697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.224757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.224922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.224947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.225119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.225142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.225363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.225412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.225566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.225589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.225734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.225793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.225943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.225991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 
00:26:20.494 [2024-07-12 16:03:17.226143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.226197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.226418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.226467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.226609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.226633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.226796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.226836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.227003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.227026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.227174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.227197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.227378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.227401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.227572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.227595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.227750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.227775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.227929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.227954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 
00:26:20.494 [2024-07-12 16:03:17.228102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.228125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.228270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.228354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.228488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.228511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.228651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.228675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.228826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.228877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.229032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.229121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.229286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.229334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.229466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.229503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.229595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.229619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.229792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.229856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 
00:26:20.494 [2024-07-12 16:03:17.229988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.230051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.230236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.230259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.230441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.230464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.230574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.230598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.230715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.230751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.230879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.230904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.231036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.231060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.231255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.231307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.231468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.231494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 00:26:20.494 [2024-07-12 16:03:17.231661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.494 [2024-07-12 16:03:17.231685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.494 qpair failed and we were unable to recover it. 
00:26:20.494 [2024-07-12 16:03:17.231826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.231852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.232025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.232050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.232199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.232223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.232438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.232489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.232717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.232761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.232895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.232958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.233204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.233254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.233440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.233492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.233681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.233705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.233898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.233949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 
00:26:20.495 [2024-07-12 16:03:17.234150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.234199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.234332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.234394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.234575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.234597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.234757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.234780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.234961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.235012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.235187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.235240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.235411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.235462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.235625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.235648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.235780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.235806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.236022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.236081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 
00:26:20.495 [2024-07-12 16:03:17.236250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.236298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.236464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.236496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.236693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.236730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.236949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.237002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.237194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.237237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.237387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.237436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.237594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.237617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.237800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.237857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.238038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.238086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.238227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.238284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 
00:26:20.495 [2024-07-12 16:03:17.238464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.495 [2024-07-12 16:03:17.238488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.495 qpair failed and we were unable to recover it. 00:26:20.495 [2024-07-12 16:03:17.238699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.238725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.238913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.238969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.239223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.239270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.239446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.239510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.239675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.239710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.239918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.239974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.240194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.240247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.240512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.240563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.240766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.240790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 
00:26:20.496 [2024-07-12 16:03:17.241011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.241062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.241265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.241314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.241519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.241573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.241710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.241755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.241946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.241998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.242163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.242206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.242398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.242444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.242570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.242593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.242780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.242846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.243032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.243081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 
00:26:20.496 [2024-07-12 16:03:17.243269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.243321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.243548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.243598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.243780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.243805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.243964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.244019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.244230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.244281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.244471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.244515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.244709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.244732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.244891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.244943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.245098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.245154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.245347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.245395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 
00:26:20.496 [2024-07-12 16:03:17.245556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.245591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.245754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.245807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.245990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.246040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.246231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.246282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.246440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.246489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.246619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.246656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.246909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.246960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.247206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.247254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.247397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.247448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.247677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.247701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 
00:26:20.496 [2024-07-12 16:03:17.247844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.247931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.248063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.248125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.496 qpair failed and we were unable to recover it. 00:26:20.496 [2024-07-12 16:03:17.248299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.496 [2024-07-12 16:03:17.248346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.248551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.248574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.248723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.248755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.248987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.249041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.249233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.249281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.249438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.249488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.249661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.249689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.249935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.249988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 
00:26:20.497 [2024-07-12 16:03:17.250211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.250263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.250408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.250459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.250624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.250646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.250786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.250846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.251016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.251073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.251230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.251282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.251443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.251496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.251646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.251669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.251865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.251918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.252050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.252103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 
00:26:20.497 [2024-07-12 16:03:17.252209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.252263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.252401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.252438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.252548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.252571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.252750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.252775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.253003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.253058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.253232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.253282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.253519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.253566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.253749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.253773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.253905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.253942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.254088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.254146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 
00:26:20.497 [2024-07-12 16:03:17.254307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.254358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.254508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.254555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.254687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.254723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.254894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.254953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.255146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.255169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.255327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.255350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.255527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.255550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.255717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.255760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.255972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.256020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.256213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.256270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 
00:26:20.497 [2024-07-12 16:03:17.256430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.256480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.256664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.256687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.256868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.497 [2024-07-12 16:03:17.256929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.497 qpair failed and we were unable to recover it. 00:26:20.497 [2024-07-12 16:03:17.257081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.257133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.257287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.257338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.257520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.257570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.257776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.257805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.257977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.258029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.258246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.258296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.258526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.258577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 
00:26:20.498 [2024-07-12 16:03:17.258693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.258715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.258863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.258933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.259130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.259178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.259357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.259404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.259573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.259596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.259743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.259767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.259996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.260071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.260258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.260307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.260542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.260594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.260842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.260893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 
00:26:20.498 [2024-07-12 16:03:17.261080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.261134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.261328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.261379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.261577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.261600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.261843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.261895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.262068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.262122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.262288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.262337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.262518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.262541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.262708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.262746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.262958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.263010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.263181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.263238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 
00:26:20.498 [2024-07-12 16:03:17.263395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.263445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.263606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.263629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.263780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.263872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.264054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.264105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.264240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.264295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.264472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.264494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.264674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.264698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.264900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.264952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.265142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.265192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.265419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.265468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 
00:26:20.498 [2024-07-12 16:03:17.265608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.265631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.265822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.265871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.266108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.266160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.266351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.498 [2024-07-12 16:03:17.266415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.498 qpair failed and we were unable to recover it. 00:26:20.498 [2024-07-12 16:03:17.266590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.266614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.266766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.266829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.267065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.267115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.267309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.267367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.267600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.267623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.267831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.267856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 
00:26:20.499 [2024-07-12 16:03:17.268035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.268095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.268227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.268283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.268493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.268516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.268682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.268705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.268877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.268929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.269179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.269229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.269413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.269478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.269688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.269712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.269903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.269953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.270193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.270242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 
00:26:20.499 [2024-07-12 16:03:17.270441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.270498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.270677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.270700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.270917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.270964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.271188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.271238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.271373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.271424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.271540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.271564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.271690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.271728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.271979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.272037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.272242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.272293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.272458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.272509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 
00:26:20.499 [2024-07-12 16:03:17.272702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.272730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.272909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.272968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.273106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.273167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.273416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.273468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.273691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.273715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.273858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.273911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.274136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.274191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.274384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.274427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.274607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.499 [2024-07-12 16:03:17.274630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.499 qpair failed and we were unable to recover it. 00:26:20.499 [2024-07-12 16:03:17.274779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.274802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 
00:26:20.500 [2024-07-12 16:03:17.275057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.275108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.275257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.275304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.275511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.275552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.275744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.275768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.275894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.275963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.276125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.276177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.276359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.276408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.276618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.276641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.276828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.276853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.277039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.277090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 
00:26:20.500 [2024-07-12 16:03:17.277205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.277259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.277473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.277522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.277697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.277735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.277935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.277983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.278179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.278229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.278427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.278478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.278617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.278640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.278854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.278902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.279059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.279107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.279289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.279340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 
00:26:20.500 [2024-07-12 16:03:17.279529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.279554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.279787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.279812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.280002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.280054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.280233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.280280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.280525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.280574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.280788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.280812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.281002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.281062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.281200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.281254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.281398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.281450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.281558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.281582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 
00:26:20.500 [2024-07-12 16:03:17.281838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.281908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.282127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.282182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.282361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.282410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.282611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.282635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.282775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.282798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.282985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.283042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.283241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.283289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.283464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.283511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.283709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.283754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 00:26:20.500 [2024-07-12 16:03:17.283959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.500 [2024-07-12 16:03:17.284008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.500 qpair failed and we were unable to recover it. 
00:26:20.500 [2024-07-12 16:03:17.284175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.284232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.284460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.284507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.284676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.284700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.284926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.284978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.285157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.285208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.285455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.285506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.285648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.285671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.285849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.285898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.286081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.286132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.286290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.286338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 
00:26:20.501 [2024-07-12 16:03:17.286547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.286571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.286725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.286770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.286960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.287006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.287204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.287253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.287406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.287458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.287624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.287647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.287773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.287798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.287993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.288047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.288234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.288287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.288473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.288525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 
00:26:20.501 [2024-07-12 16:03:17.288746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.288785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.289013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.289061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.289240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.289291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.289535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.289583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.289764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.289831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.290012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.290065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.290221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.290276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.290449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.290500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.290653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.290675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.290787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.290812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 
00:26:20.501 [2024-07-12 16:03:17.291039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.291087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.291252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.291302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.291530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.291579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.291771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.291816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.292045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.292092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.292277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.292327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.292498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.292547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.292694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.292717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.501 [2024-07-12 16:03:17.292910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.501 [2024-07-12 16:03:17.292964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.501 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.293182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.293230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 
00:26:20.502 [2024-07-12 16:03:17.293350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.293393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.293588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.293611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.293770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.293798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.293979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.294030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.294183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.294226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.294388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.294446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.294635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.294658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.294804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.294844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.295005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.295061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.295236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.295291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 
00:26:20.502 [2024-07-12 16:03:17.295502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.295526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.295751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.295774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.295964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.296014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.296190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.296234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.296394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.296442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.296607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.296631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.296802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.296841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.296992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.297045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.297178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.297233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.297420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.297473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 
00:26:20.502 [2024-07-12 16:03:17.297631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.297665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.297814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.297901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.298112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.298135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.298322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.298373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.298543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.298565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.298763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.298788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.298964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.299026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.299244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.299293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.299508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.299558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.299687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.299725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 
00:26:20.502 [2024-07-12 16:03:17.299920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.299977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.300117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.300168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.300306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.300359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.300492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.300530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.300696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.300733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.300846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.300869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.300979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.301003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.301167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.301190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.301362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.301385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 00:26:20.502 [2024-07-12 16:03:17.301477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.502 [2024-07-12 16:03:17.301500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.502 qpair failed and we were unable to recover it. 
00:26:20.502 [2024-07-12 16:03:17.301620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.301645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.301832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.301884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.301997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.302034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.302122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.302146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.302281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.302304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.302475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.302513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.302660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.302683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.302788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.302811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.302942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.302967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.303134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.303171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 
00:26:20.503 [2024-07-12 16:03:17.303353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.303375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.303487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.303511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.303630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.303653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.303847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.303872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.303969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.303992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.304127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.304176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.304292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.304329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.304455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.304479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.304650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.304673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.304831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.304897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 
00:26:20.503 [2024-07-12 16:03:17.305092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.305144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.305260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.305327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.305499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.305522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.305650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.305673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.305832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.305857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.306047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.306097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.306272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.306320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.306491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.306514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.306622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.306646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.306805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.306873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 
00:26:20.503 [2024-07-12 16:03:17.307045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.307069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.307230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.307295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.307406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.307429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.503 [2024-07-12 16:03:17.307579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.503 [2024-07-12 16:03:17.307606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.503 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.307731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.307761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.307925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.307949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.308151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.308174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.308349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.308398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.308531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.308554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.308730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.308773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 
00:26:20.504 [2024-07-12 16:03:17.308908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.308931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.309108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.309132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.309273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.309295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.309447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.309505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.309650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.309673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.309867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.309926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.310050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.310101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.310263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.310314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.310484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.310519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.310754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.310778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 
00:26:20.504 [2024-07-12 16:03:17.310957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.311008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.311157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.311210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.311361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.311407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.311572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.311594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.311759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.311784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.311923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.311971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.312124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.312186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.312320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.312377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.312524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.312547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.312655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.312679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 
00:26:20.504 [2024-07-12 16:03:17.312849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.312913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.313061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.313112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.313222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.313260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.313426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.313449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.313592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.313616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.313761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.313786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.313907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.313932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.314045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.314068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.314156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.314179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.314296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.314319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 
00:26:20.504 [2024-07-12 16:03:17.314467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.314491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.314634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.314657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.314808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.314833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.314956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.314981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.315123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.504 [2024-07-12 16:03:17.315152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.504 qpair failed and we were unable to recover it. 00:26:20.504 [2024-07-12 16:03:17.315278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.315307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.315446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.315469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.315591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.315614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.315746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.315771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.315912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.315937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 
00:26:20.505 [2024-07-12 16:03:17.316070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.316109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.316199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.316221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.316340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.316363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.316528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.316551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.316672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.316695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.316816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.316839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.317003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.317027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.317172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.317199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.317330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.317354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.317487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.317512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 
00:26:20.505 [2024-07-12 16:03:17.317617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.317652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.317820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.317846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.317984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.318016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.318156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.318194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.318317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.318341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.318483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.318507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.318629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.318654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.318834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.318883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.319036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.319062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.319213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.319246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 
00:26:20.505 [2024-07-12 16:03:17.319415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.319438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.319611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.319636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.319794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.319820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.319921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.319947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.320103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.320127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.320268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.320306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.320552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.320602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.320780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.320812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.320940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.320979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.321157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.321209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 
00:26:20.505 [2024-07-12 16:03:17.321360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.321417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.321577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.321600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.321769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.321798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.321932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.321973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.322131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.322188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.322336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.322384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.505 [2024-07-12 16:03:17.322511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.505 [2024-07-12 16:03:17.322535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.505 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.322702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.322747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.322849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.322891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.323027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.323068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 
00:26:20.506 [2024-07-12 16:03:17.323225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.323278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.323377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.323400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.323569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.323592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.323715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.323761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.323868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.323910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.324011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.324051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.324189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.324226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.324345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.324368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.324467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.324490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.324618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.324643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 
00:26:20.506 [2024-07-12 16:03:17.324776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.324801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.324902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.324926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.325094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.325130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.325299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.325322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.325457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.325480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.325640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.325678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.325803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.325845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.326045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.326100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.326254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.326306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.326428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.326465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 
00:26:20.506 [2024-07-12 16:03:17.326592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.326615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.326757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.326782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.326881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.326909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.327099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.327163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.327260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.327298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.327454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.327477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.327592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.327615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.327782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.327807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.327938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.327977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.328121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.328144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 
00:26:20.506 [2024-07-12 16:03:17.328312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.328349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.328513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.328536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.328632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.328654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.328801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.328829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.328976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.329000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.329164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.329228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.329365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.329403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.329531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.506 [2024-07-12 16:03:17.329554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.506 qpair failed and we were unable to recover it. 00:26:20.506 [2024-07-12 16:03:17.329745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.329770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.329883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.329906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 
00:26:20.507 [2024-07-12 16:03:17.330037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.330060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.330232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.330255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.330389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.330412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.330529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.330551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.330676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.330699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.330881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.330906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.331066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.331089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.331275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.331333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.331502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.331525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.331702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.331725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 
00:26:20.507 [2024-07-12 16:03:17.331842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.331866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.332001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.332043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.332193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.332233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.332359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.332386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.332506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.332530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.332641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.332664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.332802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.332827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.332961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.332985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.333119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.333143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.333318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.333341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 
00:26:20.507 [2024-07-12 16:03:17.333474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.333498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.333608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.333630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.333751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.333779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.333908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.333932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.334058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.334081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.334250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.334272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.334402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.334426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.334571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.334610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.334705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.334749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.334899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.334924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 
00:26:20.507 [2024-07-12 16:03:17.335045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.335068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.335203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.335227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.335355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.335379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.335541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.335593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.335773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.335800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.335909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.335935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.507 [2024-07-12 16:03:17.336066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.507 [2024-07-12 16:03:17.336106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.507 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.336250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.336274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.336410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.336449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.336570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.336594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 
00:26:20.508 [2024-07-12 16:03:17.336757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.336781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.336918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.336943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.337113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.337166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.337305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.337353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.337453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.337481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.337627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.337651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.337750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.337774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.337874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.337897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.338005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.338059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.338240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.338312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 
00:26:20.508 [2024-07-12 16:03:17.338470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.338532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.338685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.338708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.338839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.338866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.339044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.339068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.339233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.339292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.339441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.339491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.339660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.339684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.339826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.339865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.340028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.340051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.340141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.340178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 
00:26:20.508 [2024-07-12 16:03:17.340347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.340399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.340519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.340542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.340673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.340696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.340845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.340898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.341029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.341055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.341161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.341186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.341316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.341344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.341482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.341537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.341661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.341689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.341841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.341866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 
00:26:20.508 [2024-07-12 16:03:17.341970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.341993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.342115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.342143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.342289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.342317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.342433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.342461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.508 [2024-07-12 16:03:17.342553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.508 [2024-07-12 16:03:17.342581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.508 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.342752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.342804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.342987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.343024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.343196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.343226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.343379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.343407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.343555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.343582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 
00:26:20.509 [2024-07-12 16:03:17.343747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.343790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.343916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.343942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.344112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.344135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.344284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.344347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.344576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.344640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.344865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.344891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.345013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.345052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.345190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.345235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.345450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.345513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.345696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.345791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 
00:26:20.509 [2024-07-12 16:03:17.345882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.345906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.346039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.346062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.346206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.346273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.346520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.346583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.346806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.346832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.346952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.346976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.347138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.347166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.347347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.347410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.347611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.347674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.347851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.347875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 
00:26:20.509 [2024-07-12 16:03:17.348039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.348062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.348188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.348212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.348353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.348416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.348641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.348704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.348923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.348947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.349079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.349127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.349356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.349420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.349670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.349733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.349899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.349924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.350074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.350102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 
00:26:20.509 [2024-07-12 16:03:17.350220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.350258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.350382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.350406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.350564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.350642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.350880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.350905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.351024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.351073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.509 [2024-07-12 16:03:17.351332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.509 [2024-07-12 16:03:17.351395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.509 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.351658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.351721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.351913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.351937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.352092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.352119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.352285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.352307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 
00:26:20.510 [2024-07-12 16:03:17.352488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.352551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.352755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.352798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.352926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.352951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.353092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.353116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.353322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.353385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.353612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.353675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.353892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.353917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.354034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.354072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.354213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.354250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.354369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.354392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 
00:26:20.510 [2024-07-12 16:03:17.354636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.354699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.354939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.354984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.355161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.355224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.355417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.355481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.355696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.355749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.355944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.355989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.356178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.356241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.356486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.356531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.356703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.356783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.357001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.357046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 
00:26:20.510 [2024-07-12 16:03:17.357287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.357332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.357508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.357570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.357772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.357843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.358027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.358076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.358230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.358293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.358522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.358585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.358827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.358877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.359076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.359140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.359333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.359397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.359608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.359655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 
00:26:20.510 [2024-07-12 16:03:17.359852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.359900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.360078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.360141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.360331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.360379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.360547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.360624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.360854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.360903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.361121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.361171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.510 qpair failed and we were unable to recover it. 00:26:20.510 [2024-07-12 16:03:17.361331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.510 [2024-07-12 16:03:17.361403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.361634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.361698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.361960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.362011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.362215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.362277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 
00:26:20.511 [2024-07-12 16:03:17.362499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.362562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.362787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.362839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.362989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.363060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.363271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.363333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.363541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.363592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.363785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.363860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.364059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.364131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.364340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.364391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.364597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.364660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.364937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.364988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 
00:26:20.511 [2024-07-12 16:03:17.365187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.365241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.365422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.365485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.365716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.365793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.366022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.366097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.366309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.366363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.366554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.366618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.366854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.366910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.367135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.367198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.367436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.367491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.367688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.367764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 
00:26:20.511 [2024-07-12 16:03:17.367973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.368028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.368260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.368323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.368567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.368621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.368854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.368909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.369118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.369181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.369380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.369443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.369697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.369780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.369973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.370051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.370276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.370339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.370569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.370631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 
00:26:20.511 [2024-07-12 16:03:17.370892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.370951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.371155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.371218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.371420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.371482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.371679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.371758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.372002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.372061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.372265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.372328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.372554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.372627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.511 qpair failed and we were unable to recover it. 00:26:20.511 [2024-07-12 16:03:17.372878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.511 [2024-07-12 16:03:17.372938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.512 qpair failed and we were unable to recover it. 00:26:20.512 [2024-07-12 16:03:17.373175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.512 [2024-07-12 16:03:17.373238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.512 qpair failed and we were unable to recover it. 00:26:20.512 [2024-07-12 16:03:17.373469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.512 [2024-07-12 16:03:17.373532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.512 qpair failed and we were unable to recover it. 
[... the same three-message sequence repeats continuously here, differing only in timestamps (16:03:17.373769 through 16:03:17.427294): posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. ...]
00:26:20.518 [2024-07-12 16:03:17.427469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.427532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.427768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.427832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.428056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.428119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.428317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.428380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.428614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.428677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.428914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.428979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.429209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.429285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.429521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.429584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.429812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.429877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.430107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.430170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 
00:26:20.518 [2024-07-12 16:03:17.430366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.430428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.430654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.430717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.430934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.430997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.431194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.431258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.431495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.431559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.431794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.431858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.432057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.432120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.432344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.432407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.432612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.432675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.432911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.432976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 
00:26:20.518 [2024-07-12 16:03:17.433224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.433288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.433492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.433555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.433771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.433835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.434033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.434096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.434324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.434387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.434614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.434678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.434865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.434929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.435124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.435187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.518 qpair failed and we were unable to recover it. 00:26:20.518 [2024-07-12 16:03:17.435414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.518 [2024-07-12 16:03:17.435477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.435649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.435712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 
00:26:20.519 [2024-07-12 16:03:17.435932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.435995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.436197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.436260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.436485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.436548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.436780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.436845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.437074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.437136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.437331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.437393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.437615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.437678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.437924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.437988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.438216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.438279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.438508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.438571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 
00:26:20.519 [2024-07-12 16:03:17.438772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.438836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.439060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.439124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.439321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.439385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.439623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.439687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.439951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.440015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.440217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.440281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.440455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.440526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.440791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.440856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.441096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.441161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.441386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.441449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 
00:26:20.519 [2024-07-12 16:03:17.441677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.441757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.441988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.442051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.442260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.442323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.442519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.442583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.442808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.442874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.443101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.443164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.443346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.443410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.443602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.443666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.443881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.443946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.519 qpair failed and we were unable to recover it. 00:26:20.519 [2024-07-12 16:03:17.444143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.519 [2024-07-12 16:03:17.444207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 
00:26:20.520 [2024-07-12 16:03:17.444447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.444511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.444702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.444781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.444959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.445023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.445246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.445309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.445542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.445606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.445838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.445905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.446112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.446176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.446384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.446447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.446674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.446756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.446990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.447054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 
00:26:20.520 [2024-07-12 16:03:17.447278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.447341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.447566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.447630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.447841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.447907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.448147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.448211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.448438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.448502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.448724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.448816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.449020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.449083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.449250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.449313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.449546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.449610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.449832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.449898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 
00:26:20.520 [2024-07-12 16:03:17.450098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.450161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.450356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.450419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.450646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.450709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.450929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.450992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.451219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.451283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.451478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.451542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.451779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.451852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.452080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.452143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.452374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.452439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.520 [2024-07-12 16:03:17.452639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.452702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 
00:26:20.520 [2024-07-12 16:03:17.452967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.520 [2024-07-12 16:03:17.453032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.520 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.453210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.453273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.453470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.453534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.453758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.453824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.454030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.454094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.454291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.454355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.454551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.454614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.454814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.454880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.455102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.455166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.455391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.455454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 
00:26:20.521 [2024-07-12 16:03:17.455668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.455733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.455979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.456042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.456269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.456332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.456558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.456621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.456849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.456914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.457107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.457170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.457391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.457455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.457678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.457754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.457989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.458053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.458286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.458349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 
00:26:20.521 [2024-07-12 16:03:17.458543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.458605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.458802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.458867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.459071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.459135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.459312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.459376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.459553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.459615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.459810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.459875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.460086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.460150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.460372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.460435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.460661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.460724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.460989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.461053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 
00:26:20.521 [2024-07-12 16:03:17.461260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.461324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.461521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.461584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.461782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.461848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.462074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.462138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.521 [2024-07-12 16:03:17.462337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.521 [2024-07-12 16:03:17.462401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.521 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.462618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.462682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.462922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.462995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.463199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.463261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.463494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.463558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.463778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.463843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 
00:26:20.522 [2024-07-12 16:03:17.464041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.464106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.464334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.464396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.464598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.464662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.464927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.464994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.465232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.465293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.465468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.465531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.465768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.465833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.466056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.466119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.466289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.466352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.466560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.466623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 
00:26:20.522 [2024-07-12 16:03:17.466871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.466936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.467135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.467198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.467396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.467459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.467630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.467693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.467937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.468001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.468169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.468231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.468464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.468527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.468723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.468802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.468969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.469031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.469256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.469319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 
00:26:20.522 [2024-07-12 16:03:17.469544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.469607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.469807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.469871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.470097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.470160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.470403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.470467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.470658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.470720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.470970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.471034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.471236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.471298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.471522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.471585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.522 [2024-07-12 16:03:17.471815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.522 [2024-07-12 16:03:17.471881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.522 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.472079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.472142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 
00:26:20.523 [2024-07-12 16:03:17.472317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.472381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.472608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.472672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.472901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.472965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.473168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.473231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.473457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.473520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.473769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.473834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.474062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.474135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.474361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.474424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.474649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.474712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.474945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.475009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 
00:26:20.523 [2024-07-12 16:03:17.475234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.475297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.475521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.475584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.475820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.475886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.476115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.476178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.476418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.476481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.476709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.476788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.477016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.477079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.477248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.477311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.477535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.477598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.477805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.477870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 
00:26:20.523 [2024-07-12 16:03:17.478079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.478143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.478344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.478407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.478640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.478703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.478902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.478966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.479196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.479259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.479483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.479546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.479791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.479857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.480080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.480142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.480368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.480431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.480630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.480693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 
00:26:20.523 [2024-07-12 16:03:17.480954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.481018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.481225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.481288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.481517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.481580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.481825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.481889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.482115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.523 [2024-07-12 16:03:17.482177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.523 qpair failed and we were unable to recover it. 00:26:20.523 [2024-07-12 16:03:17.482406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.482469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.482694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.482771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.482997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.483060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.483282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.483345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.483539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.483602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 
00:26:20.524 [2024-07-12 16:03:17.483800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.483866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.484099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.484162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.484358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.484422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.484621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.484685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.484925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.484989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.485215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.485278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.485508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.485580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.485805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.485869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.486042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.486106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.486308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.486371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 
00:26:20.524 [2024-07-12 16:03:17.486544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.486607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.486801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.486866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.487100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.487126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.487343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.487407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.487641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.487704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.487962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.488026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.488227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.488290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.488514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.488578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.488804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.488856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.489054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.489117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 
00:26:20.524 [2024-07-12 16:03:17.489353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.489416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.489634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.489697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.489944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.490008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.490207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.490270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.524 [2024-07-12 16:03:17.490494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.524 [2024-07-12 16:03:17.490557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.524 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.490765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.490830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.491032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.491095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.491292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.491355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.491578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.491642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.491889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.491953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 
00:26:20.525 [2024-07-12 16:03:17.492183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.492247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.492470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.492534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.492768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.492833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.493068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.493132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.493360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.493423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.493657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.493720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.493968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.494032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.494258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.494320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.494556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.494620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.494842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.494908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 
00:26:20.525 [2024-07-12 16:03:17.495135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.495198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.495425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.495488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.495689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.495766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.495967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.496030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.496233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.496296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.496518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.496581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.496819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.496884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.497095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.497159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.497339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.497403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.497627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.497690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 
00:26:20.525 [2024-07-12 16:03:17.497905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.497968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.498194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.498258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.525 qpair failed and we were unable to recover it. 00:26:20.525 [2024-07-12 16:03:17.498486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.525 [2024-07-12 16:03:17.498549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.498769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.498834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.499034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.499098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.499324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.499387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.499589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.499653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.499894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.499958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.500154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.500217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.500449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.500513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 
00:26:20.526 [2024-07-12 16:03:17.500772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.500837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.501061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.501124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.501344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.501407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.501636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.501699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.501912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.501975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.502173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.502235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.502433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.502497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.502699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.502775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.503004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.503067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.503290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.503354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 
00:26:20.526 [2024-07-12 16:03:17.503576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.503639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.503886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.503950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.504152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.504214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.504438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.504511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.504790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.504855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.505057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.505121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.505349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.505413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.505622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.505684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.505938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.506003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.506197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.506259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 
00:26:20.526 [2024-07-12 16:03:17.506454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.506517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.506761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.506826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.507036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.507099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.526 qpair failed and we were unable to recover it. 00:26:20.526 [2024-07-12 16:03:17.507306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.526 [2024-07-12 16:03:17.507369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.507596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.507658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.507905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.507969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.508176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.508239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.508477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.508541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.508773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.508838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.509069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.509133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 
00:26:20.527 [2024-07-12 16:03:17.509356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.509418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.509648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.509711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.509941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.510007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.510237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.510309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.510552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.510617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.510847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.510912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.511142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.511205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.511405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.511467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.511670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.511735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.511954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.512017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 
00:26:20.527 [2024-07-12 16:03:17.512259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.512323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.512546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.512608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.512810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.512875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.513083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.513146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.513372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.513434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.513638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.513700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.513956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.514021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.514216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.514279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.514505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.514567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 00:26:20.527 [2024-07-12 16:03:17.514791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.527 [2024-07-12 16:03:17.514855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.527 qpair failed and we were unable to recover it. 
00:26:20.527 [2024-07-12 16:03:17.515060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.527 [2024-07-12 16:03:17.515124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420
00:26:20.527 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every connection retry logged between 16:03:17.515 and 16:03:17.574 ...]
00:26:20.533 [2024-07-12 16:03:17.573904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.533 [2024-07-12 16:03:17.573958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420
00:26:20.533 qpair failed and we were unable to recover it.
00:26:20.533 [2024-07-12 16:03:17.574099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.574134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.574266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.574301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.574483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.574546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.574772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.574837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.575039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.575102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.575271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.575334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.575531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.575594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.575782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.575847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.576053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.576117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.576314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.576376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 
00:26:20.533 [2024-07-12 16:03:17.576601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.576664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.576925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.576990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.577222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.577285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.577509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.577572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.577777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.577841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.578066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.578129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.578294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.578358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.578564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.578628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.578871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.578937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.579143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.579206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 
00:26:20.533 [2024-07-12 16:03:17.579442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.533 [2024-07-12 16:03:17.579505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.533 qpair failed and we were unable to recover it. 00:26:20.533 [2024-07-12 16:03:17.579702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.579784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.580019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.580082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.580278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.580341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.580574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.580637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.580907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.580973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.581202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.581264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.581462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.581525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.581772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.581838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.582004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.582067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 
00:26:20.534 [2024-07-12 16:03:17.582239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.582303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.582512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.582575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.582773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.582837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.583033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.583104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.583331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.583394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.583579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.583653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.583901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.583966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.584160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.584222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.584445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.584508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.584730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.584806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 
00:26:20.534 [2024-07-12 16:03:17.585031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.585094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.585325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.585388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.585609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.585671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.585856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.585920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.586097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.586160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.586383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.586445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.586621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.586684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.586923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.586987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.587189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.587252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.587461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.587524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 
00:26:20.534 [2024-07-12 16:03:17.587768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.587832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.588032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.588095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.588328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.588391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.588617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.588680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.588917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.588982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.589174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.589237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.589466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.589529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.589731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.589812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.589989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.590052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.590290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.590353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 
00:26:20.534 [2024-07-12 16:03:17.590588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.590651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.590868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.590932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.591159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.591223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.591394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.591457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.591682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.591759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.591997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.592060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.592286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.592350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.592553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.534 [2024-07-12 16:03:17.592615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.534 qpair failed and we were unable to recover it. 00:26:20.534 [2024-07-12 16:03:17.592832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.592898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.593099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.593162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 
00:26:20.535 [2024-07-12 16:03:17.593403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.593467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.593693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.593767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.593971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.594034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.594261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.594334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.594542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.594605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.594784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.594848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.595076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.595139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.595371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.595434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.595657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.595720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.595939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.596003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 
00:26:20.535 [2024-07-12 16:03:17.596150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.596214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.596414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.596477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.596712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.596804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.597010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.597074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.597301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.597364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.597588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.597651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.597901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.597966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.598207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.598270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.598478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.598541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.598771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.598835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 
00:26:20.535 [2024-07-12 16:03:17.599062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.599126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.599352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.599416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.599623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.599685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.599863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.599928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.600133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.600196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.600420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.600482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.600702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.600780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.601007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.601069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.601235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.601298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.601522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.601585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 
00:26:20.535 [2024-07-12 16:03:17.601795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.601859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.602064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.602127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.602350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.602413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.602640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.602703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.602918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.602982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.603148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.603210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.603445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.603508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.603705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.603786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.603997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.604060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.604256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.604319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 
00:26:20.535 [2024-07-12 16:03:17.604513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.604576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.604802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.604867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.605100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.605164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.605389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.605462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.605684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.605758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.535 [2024-07-12 16:03:17.605949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.535 [2024-07-12 16:03:17.606013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.535 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.606209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.606272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.606442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.606504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.606701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.606776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.607004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.607069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 
00:26:20.536 [2024-07-12 16:03:17.607299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.607361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.607557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.607620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.607820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.607884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.608081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.608144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.608312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.608375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.608601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.608663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.608916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.608980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.609218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.609281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.609515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.609577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.609780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.609845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 
00:26:20.536 [2024-07-12 16:03:17.610051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.610114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.610314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.610376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.610577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.610640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.610849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.610914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.611138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.611201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.611428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.611492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.611713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.611787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.611988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.612050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.612279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.612343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.612574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.612636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 
00:26:20.536 [2024-07-12 16:03:17.612878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.612943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.613173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.613236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.613433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.613496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.613696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.613777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.614049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.614113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.614281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.614343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.614567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.614630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.614870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.614934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.615131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.615194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.615429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.615492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 
00:26:20.536 [2024-07-12 16:03:17.615717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.615797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.615992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.616055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.616251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.616314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.616543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.616615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.616816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.616879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.617078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.617142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.617364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.617427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.536 [2024-07-12 16:03:17.617636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.536 [2024-07-12 16:03:17.617691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.536 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.617932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.617996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.618197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.618260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 
00:26:20.537 [2024-07-12 16:03:17.618462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.618525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.618759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.618823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.619034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.619097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.619324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.619389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.619626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.619688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.619900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.619963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.620193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.620257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.620471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.620534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.620759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.620824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.621024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.621086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 
00:26:20.537 [2024-07-12 16:03:17.621291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.621354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.621546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.621608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.621811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.621876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.622096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.622159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.622386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.622449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.622643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.622705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.622947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.623011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.623233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.623296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.623497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.623560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.623796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.623860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 
00:26:20.537 [2024-07-12 16:03:17.624066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.624129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.624297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.624360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.624556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.624619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.624826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.624890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.625093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.625156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.625360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.625423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.625644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.625706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.625953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.626016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.626185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.626248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.626411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.626473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 
00:26:20.537 [2024-07-12 16:03:17.626675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.626755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.626991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.627054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.627279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.627342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.627537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.627608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.627837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.627902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.628133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.628195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.628407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.628470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.628697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.628773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.628976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.629039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.629240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.629303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 
00:26:20.537 [2024-07-12 16:03:17.629524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.629587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.629785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.629848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.630074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.630136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.537 [2024-07-12 16:03:17.630364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.537 [2024-07-12 16:03:17.630427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.537 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.630649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.630710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.630927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.630990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.631163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.631226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.631433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.631497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.631724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.631803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.632000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.632063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 
00:26:20.538 [2024-07-12 16:03:17.632289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.632351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.632558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.632620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.632857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.632922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.633090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.633152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.633372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.633436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.633629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.633691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.633901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.633964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.634192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.634254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.634463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.634526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.634758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.634822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 
00:26:20.538 [2024-07-12 16:03:17.635057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.635121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.635356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.635418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.635642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.635705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.635958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.636020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.636246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.636308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.636508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.636571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.636801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.636865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.637065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.637128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.637353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.637416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.637641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.637704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 
00:26:20.538 [2024-07-12 16:03:17.637890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.637954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.638178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.638240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.638408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.638470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.638666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.638729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.638993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.639056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.639281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.639343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.639540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.639604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.639775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.639840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.640036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.640099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.640328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.640391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 
00:26:20.538 [2024-07-12 16:03:17.640596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.640659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.640926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.640991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.641191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.641254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.641477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.641541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.641773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.641839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.642067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.642129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.642351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.642414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.642654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.642717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.642970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.643034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.643255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.643317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 
00:26:20.538 [2024-07-12 16:03:17.643543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.643606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.538 qpair failed and we were unable to recover it. 00:26:20.538 [2024-07-12 16:03:17.643825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.538 [2024-07-12 16:03:17.643889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.644120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.644183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.644386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.644449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.644671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.644734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.644955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.645018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.645222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.645284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.645506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.645568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.645770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.645835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.646060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.646122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 
00:26:20.539 [2024-07-12 16:03:17.646346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.646418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.646645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.646708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.646929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.646992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.647219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.647282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.647507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.647570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.647806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.647869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.648093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.648155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.648385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.648447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.648653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.648716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.648978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.649042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 
00:26:20.539 [2024-07-12 16:03:17.649251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.649314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.649517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.649580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.649779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.649844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.650046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.650110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.650343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.650405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.650607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.650670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.650879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.650943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.651135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.651198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.651423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.651486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.651684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.651759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 
00:26:20.539 [2024-07-12 16:03:17.651975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.652039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.652263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.652325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.652553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.652616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.652856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.652920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.653148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.653211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.653443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.653505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.653733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.653811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.654052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.654115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.654341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.654403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.654629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.654692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 
00:26:20.539 [2024-07-12 16:03:17.654935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.654999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.655225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.655288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.655522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.655584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.655813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.655878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.656076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.656139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.656366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.656428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.656664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.656727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.656992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.657056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.657281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.539 [2024-07-12 16:03:17.657344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.539 qpair failed and we were unable to recover it. 00:26:20.539 [2024-07-12 16:03:17.657567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.657629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 
00:26:20.540 [2024-07-12 16:03:17.657871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.657946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.658149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.658213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.658440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.658504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.658723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.658798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.659026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.659091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.659295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.659358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.659582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.659644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.659892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.659956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.660181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.660244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.660473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.660536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 
00:26:20.540 [2024-07-12 16:03:17.660757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.660821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.661046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.661110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.661326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.661389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.661615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.661678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.661906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.661970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.662204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.662267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.662493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.662556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.662783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.662848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.663045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.663107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.663335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.663398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 
00:26:20.540 [2024-07-12 16:03:17.663622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.663686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.663926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.663989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.664187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.664250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.664455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.664519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.664783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.664847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.665070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.665133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.665335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.665398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.665634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.665697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.665879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.665942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.666168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.666232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 
00:26:20.540 [2024-07-12 16:03:17.666458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.666521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.666717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.666798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.667002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.667066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.667295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.667358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.667558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.667621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.667850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.667914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.668138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.668201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.668425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.668488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.668683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.668756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.668961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.669024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 
00:26:20.540 [2024-07-12 16:03:17.669250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.540 [2024-07-12 16:03:17.669329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.540 qpair failed and we were unable to recover it. 00:26:20.540 [2024-07-12 16:03:17.669562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.669625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.669853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.669917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.670086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.670150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.670341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.670404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.670615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.670680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.670919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.670984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.671208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.671270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.671464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.671527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.671762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.671827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 
00:26:20.541 [2024-07-12 16:03:17.672060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.672122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.672346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.672409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.672637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.672700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.672961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.673025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.673263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.673326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.673555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.673619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.673815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.673881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.674104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.674168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.674372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.674436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.674630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.674693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 
00:26:20.541 [2024-07-12 16:03:17.674936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.674999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.675225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.675287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.675509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.675573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.675804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.675870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.676075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.676138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.676339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.676402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.676605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.676668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.676924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.676989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.677222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.677286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.677486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.677549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 
00:26:20.541 [2024-07-12 16:03:17.677778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.677843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.678014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.678078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.678270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.678333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.678561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.678625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.678832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.678897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.679099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.679162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.679365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.679429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.679654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.679717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.679934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.679997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.680226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.680290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 
00:26:20.541 [2024-07-12 16:03:17.680489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.680561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.680788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.680853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.681055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.681119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.681320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.681383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.681584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.681648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.681891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.681956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.682193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.682256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.682490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.682553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.541 [2024-07-12 16:03:17.682759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.541 [2024-07-12 16:03:17.682824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.541 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.683061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.683124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 
00:26:20.542 [2024-07-12 16:03:17.683346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.683408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.683632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.683695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.683956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.684020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.684188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.684251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.684492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.684555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.684784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.684848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.685079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.685142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.685342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.685406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.685634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.685697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.685908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.685971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 
00:26:20.542 [2024-07-12 16:03:17.686174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.686238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.686464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.686526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.686763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.686827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.687020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.687083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.687307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.687369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.687563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.687626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.687869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.687933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.688168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.688232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.688461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.688524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.688789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.688854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 
00:26:20.542 [2024-07-12 16:03:17.689080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.689143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.689342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.689405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.689646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.689709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.689956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.690019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.690210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.690273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.690500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.690562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.690769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.690833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.691030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.691093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.691315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.691378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.691588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.691651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 
00:26:20.542 [2024-07-12 16:03:17.691864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.691938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.692167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.692230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.692457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.692520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.692755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.692821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.693021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.693084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.693308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.693370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.693567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.693629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.693854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.693919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.694122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.694185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.694386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.694448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 
00:26:20.542 [2024-07-12 16:03:17.694679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.694759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.694968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.695032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.695261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.695324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.695546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.695609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.695816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.695882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.542 [2024-07-12 16:03:17.696120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.542 [2024-07-12 16:03:17.696183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.542 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.696378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.696440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.696645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.696708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.696952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.697015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.697239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.697302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 
00:26:20.543 [2024-07-12 16:03:17.697519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.697581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.697792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.697857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.698084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.698147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.698376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.698439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.698663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.698725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.698974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.699037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.699259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.699322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.699561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.699625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.699822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.699886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.700123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.700186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 
00:26:20.543 [2024-07-12 16:03:17.700415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.700479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.700701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.700781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.701036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.701099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.701298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.701360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.701590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.701652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.701864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.701929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.702131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.702194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.702395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.702457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.702683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.702763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.702996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.703059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 
00:26:20.543 [2024-07-12 16:03:17.703257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.703330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.703534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.703598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.703805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.703870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.704088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.704151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.704381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.704444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.704673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.704736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.704971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.705034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.705232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.705295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.705495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.705557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.705723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.705799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 
00:26:20.543 [2024-07-12 16:03:17.706001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.706065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.706262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.706324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.706549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.706612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.706841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.706905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.707158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.707221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.707447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.707509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.543 qpair failed and we were unable to recover it. 00:26:20.543 [2024-07-12 16:03:17.707736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.543 [2024-07-12 16:03:17.707817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.708052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.708115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.708338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.708400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.708628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.708691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec5c000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 
00:26:20.544 [2024-07-12 16:03:17.709008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0c80 is same with the state(5) to be set 00:26:20.544 [2024-07-12 16:03:17.709368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.709466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.709699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.709795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.710037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.710104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.710312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.710377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.710608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.710671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.710902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.710967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.711169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.711246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.711458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.711522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.711695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.711779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 
00:26:20.544 [2024-07-12 16:03:17.711986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.712050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.712271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.712335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.712535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.712599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.712832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.712898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.713131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.713196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.713427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.713492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.713696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.713778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.714019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.714083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.714312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.714376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.714597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.714661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 
00:26:20.544 [2024-07-12 16:03:17.714902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.714967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.715153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.715218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.715457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.715521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.715691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.715769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.716004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.716069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.716279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.716344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.716554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.716618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.716850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.716916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.717158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.717222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.717456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.717522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 
00:26:20.544 [2024-07-12 16:03:17.717769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.717852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.718100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.718164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.718393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.718458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.718689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.718767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.719019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.719092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.719325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.719390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.719570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.719634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.719875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.719941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.720181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.720245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.720455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.720519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 
00:26:20.544 [2024-07-12 16:03:17.720725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.720813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.721049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.721114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.721347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.721411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.544 qpair failed and we were unable to recover it. 00:26:20.544 [2024-07-12 16:03:17.721590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.544 [2024-07-12 16:03:17.721653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.721881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.721948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.722186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.722250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.722451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.722515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.722761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.722827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.723046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.723111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.723312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.723375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 
00:26:20.545 [2024-07-12 16:03:17.723613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.723677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.723904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.723968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.724145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.724209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.724423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.724488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.724727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.724808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.725041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.725106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.725310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.725374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.725604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.725667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.725920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.725985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.726219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.726284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 
00:26:20.545 [2024-07-12 16:03:17.726511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.726574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.726822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.726888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.727120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.727183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.727359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.727423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.727620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.727685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.727906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.727970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.728196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.728261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.728438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.728503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.728707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.728795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.729001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.729065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 
00:26:20.545 [2024-07-12 16:03:17.729275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.729338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.729512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.729575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.729780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.729845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.730073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.730138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.730311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.730383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.730584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.730648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.730898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.730964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.731172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.731235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.731407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.731472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.731669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.731732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 
00:26:20.545 [2024-07-12 16:03:17.731985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.732049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.732285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.732349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.732555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.732619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.732824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.732889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.733116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.733180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.733384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.733447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.733652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.733716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.733942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.734006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.734250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.734314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.734540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.734604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 
00:26:20.545 [2024-07-12 16:03:17.734806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.545 [2024-07-12 16:03:17.734872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.545 qpair failed and we were unable to recover it. 00:26:20.545 [2024-07-12 16:03:17.735063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.735099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.735325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.735389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.735591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.735655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.735941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.736007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.736234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.736299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.736531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.736595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.736821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.736887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.737098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.737163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.737369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.737433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 
00:26:20.546 [2024-07-12 16:03:17.737605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.737670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.737903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.737969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.738207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.738270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.738510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.738574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.738784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.738849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.739078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.739142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.739349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.739413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.739642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.739706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.739925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.739990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.740227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.740292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 
00:26:20.546 [2024-07-12 16:03:17.740521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.740585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.740817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.740883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.741116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.741181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.741348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.741413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.741615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.741688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.741911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.741975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.742175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.742239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.742405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.742468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.742670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.742734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.742993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.743058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 
00:26:20.546 [2024-07-12 16:03:17.743291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.743355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.743586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.743650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.743871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.743936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.744168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.744232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.744425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.744488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.744727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.744828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.745060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.745125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.745330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.745395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.745609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.745673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.745916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.745982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 
00:26:20.546 [2024-07-12 16:03:17.746181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.746245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.746471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.746535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.746773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.746839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.747046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.747110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.747312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.747376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.747547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.747611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.747842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.747906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.748116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.748180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.546 [2024-07-12 16:03:17.748412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.546 [2024-07-12 16:03:17.748476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.546 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.748706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.748786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 
00:26:20.547 [2024-07-12 16:03:17.748972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.749036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.749249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.749314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.749488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.749551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.749781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.749846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.750046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.750110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.750338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.750401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.750702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.750794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.751113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.751177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.751485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.751549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.751786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.751851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 
00:26:20.547 [2024-07-12 16:03:17.752063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.752126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.752378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.752442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.752676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.752770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.753009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.753072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.753259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.753336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.753551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.753655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.753987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.754086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.754355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.754425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.754708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.754800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 00:26:20.547 [2024-07-12 16:03:17.755022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.547 [2024-07-12 16:03:17.755087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.547 qpair failed and we were unable to recover it. 
00:26:20.547 [2024-07-12 16:03:17.755302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.547 [2024-07-12 16:03:17.755373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420
00:26:20.547 qpair failed and we were unable to recover it.
[the same three-line error pattern repeats continuously from 16:03:17.755 through 16:03:17.822 (console time 00:26:20.547-00:26:20.827): every connect() to 10.0.0.2, port=4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x7fec4c000b90, and each attempt ends with "qpair failed and we were unable to recover it."]
00:26:20.827 [2024-07-12 16:03:17.822970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.823036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.823247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.823318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.823547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.823612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.823800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.823865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.824120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.824185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.824409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.824475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.824770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.824837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.825166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.825230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.825537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.825602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.825848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.825914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 
00:26:20.827 [2024-07-12 16:03:17.826237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.826302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.826639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.826704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.827017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.827083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.827452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.827519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.827759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.827824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.828128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.828205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.827 [2024-07-12 16:03:17.828442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.827 [2024-07-12 16:03:17.828507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.827 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.828682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.828763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.829035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.829100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.829331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.829396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 
00:26:20.828 [2024-07-12 16:03:17.829627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.829692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.829891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.829956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.830157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.830222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.830429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.830494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.830698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.830789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.831076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.831141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.831390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.831460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.831832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.831898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.832198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.832264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.832584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.832654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 
00:26:20.828 [2024-07-12 16:03:17.833001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.833067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.833310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.833375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.833607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.833672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.833992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.834058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.834351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.834416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.834631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.834696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.834955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.835021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.835232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.835297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.835593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.835658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.835863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.835936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 
00:26:20.828 [2024-07-12 16:03:17.836100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.836165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.836356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.836421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.836646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.836711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.836979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.837045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.837273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.837339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.837603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.837667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.838041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.838107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.838406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.838471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.838722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.838804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.839056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.839122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 
00:26:20.828 [2024-07-12 16:03:17.839387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.839452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.839702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.839784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.840072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.840137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.840371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.840436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.840768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.840834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.841112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.841177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.841407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.841472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.828 [2024-07-12 16:03:17.841694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.828 [2024-07-12 16:03:17.841793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.828 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.842036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.842109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.842391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.842456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 
00:26:20.829 [2024-07-12 16:03:17.842777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.842850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.843099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.843165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.843418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.843483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.843802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.843874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.844219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.844305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.844517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.844582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.844837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.844903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.845160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.845225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.845596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.845665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.845905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.845970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 
00:26:20.829 [2024-07-12 16:03:17.846213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.846277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.846587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.846662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.846917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.846983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.847244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.847309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.847593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.847658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.847974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.848039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.848414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.848484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.848802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.848869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.849132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.849197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.849442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.849507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 
00:26:20.829 [2024-07-12 16:03:17.849800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.849867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.850149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.850213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.850495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.850560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.850774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.850839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.851074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.851139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.851488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.851553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.851933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.851998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.852223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.852288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.852483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.852548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.852872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.852939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 
00:26:20.829 [2024-07-12 16:03:17.853239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.853305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.853536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.853601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.853873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.853940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.854238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.854303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.854628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.854699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.854961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.855026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.855232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.855296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.855615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.855683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.829 [2024-07-12 16:03:17.855993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.829 [2024-07-12 16:03:17.856059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.829 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.856301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.856366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 
00:26:20.830 [2024-07-12 16:03:17.856624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.856689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.856895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.856960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.857153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.857221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.857432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.857502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.857867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.857943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.858176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.858241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.858501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.858567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.858791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.858857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.859147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.859212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.859445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.859509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 
00:26:20.830 [2024-07-12 16:03:17.859765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.859831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.860037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.860102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.860329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.860394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.860620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.860691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.860945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.861011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.861238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.861302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.861503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.861574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.861811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.861877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.862226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.862295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.862660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.862725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 
00:26:20.830 [2024-07-12 16:03:17.863040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.863105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.863352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.863427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.863788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.863854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.864108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.864173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.864357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.864423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.864626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.864692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.864889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.864953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.865185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.865250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.865471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.865537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.865789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.865856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 
00:26:20.830 [2024-07-12 16:03:17.866149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.866214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.866501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.866565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.866792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.866860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.867084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.867149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.867429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.867493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.867726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.867804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.868126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.830 [2024-07-12 16:03:17.868200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.830 qpair failed and we were unable to recover it. 00:26:20.830 [2024-07-12 16:03:17.868512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.868577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.868806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.868872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.869125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.869190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 
00:26:20.831 [2024-07-12 16:03:17.869410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.869475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.869729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.869806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.870142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.870206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.870457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.870522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.870731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.870822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.871081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.871146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.871429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.871494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.871731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.871811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.872082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.872148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.872432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.872498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 
00:26:20.831 [2024-07-12 16:03:17.872710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.872798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.873034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.873099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.873416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.873489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.873803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.873871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.874182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.874258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.874587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.874652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.874896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.874962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.875201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.875266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.875616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.875688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.875927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.875992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 
00:26:20.831 [2024-07-12 16:03:17.876314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.876381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.876725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.876805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.877048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.877113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.877347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.877416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.877704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.877787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.878019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.878085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.878346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.878411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.878666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.878731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.879050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.879114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.879433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.879499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 
00:26:20.831 [2024-07-12 16:03:17.879798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.879875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.880207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.880272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.880483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.880548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.831 [2024-07-12 16:03:17.880844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.831 [2024-07-12 16:03:17.880909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.831 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.881261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.881333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.881515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.881585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.881788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.881854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.882175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.882245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.882451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.882520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.882727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.882812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 
00:26:20.832 [2024-07-12 16:03:17.883126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.883200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.883461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.883526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.883778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.883844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.884074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.884140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.884344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.884414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.884698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.884779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.884991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.885057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.885242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.885307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.885559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.885630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.885874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.885939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 
00:26:20.832 [2024-07-12 16:03:17.886264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.886330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.886577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.886641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.886888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.886953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.887160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.887225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.887429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.887494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.887769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.887835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.888105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.888170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.888379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.888445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.888690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.888786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.889024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.889089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 
00:26:20.832 [2024-07-12 16:03:17.889302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.889374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.889556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.889621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.889807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.889873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.890079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.890144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.890378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.890443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.890649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.890714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.890998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.891063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.891340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.891406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.891612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.891678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.891904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.891969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 
00:26:20.832 [2024-07-12 16:03:17.892215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.892280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.892506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.892592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.892906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.832 [2024-07-12 16:03:17.892972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.832 qpair failed and we were unable to recover it. 00:26:20.832 [2024-07-12 16:03:17.893216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.893281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.893511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.893581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.893863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.893929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.894166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.894231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.894472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.894536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.894806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.894871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.895098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.895163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 
00:26:20.833 [2024-07-12 16:03:17.895369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.895440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.895808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.895874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.896192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.896267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.896460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.896527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.896796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.896862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.897177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.897242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.897487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.897559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.897814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.897881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.898109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.898173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.898421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.898485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 
00:26:20.833 [2024-07-12 16:03:17.898688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.898772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.898973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.899038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.899239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.899303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.899568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.899633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.899852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.899918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.900157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.900221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.900438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.900503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.900728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.900807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.901023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.901088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.901369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.901434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 
00:26:20.833 [2024-07-12 16:03:17.901623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.901687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.901884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.901950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.902142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.902207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.902442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.902506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.902685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.902785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.902987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.903052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.903252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.903317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.903501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.903575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.903791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.903858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.904059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.904123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 
00:26:20.833 [2024-07-12 16:03:17.904326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.904391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.904591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.904666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.904850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.904915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.905168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.833 [2024-07-12 16:03:17.905233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.833 qpair failed and we were unable to recover it. 00:26:20.833 [2024-07-12 16:03:17.905447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.905513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.905722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.905799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.905975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.906053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.906262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.906327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.906511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.906576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.906816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.906883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 
00:26:20.834 [2024-07-12 16:03:17.907086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.907151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.907350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.907415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.907696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.907779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.907956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.908021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.908227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.908292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.908532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.908598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.908792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.908858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.909061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.909127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.909304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.909369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.909587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.909651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 
00:26:20.834 [2024-07-12 16:03:17.909853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.909918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.910097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.910163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.910372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.910437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.910632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.910708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.910966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.911032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.911208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.911274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.911446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.911511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.911723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.911803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.912009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.912074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.912299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.912364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 
00:26:20.834 [2024-07-12 16:03:17.912540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.912605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.912776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.912842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.913045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.913110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.913396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.913462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.913653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.913717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.913947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.914012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.914217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.914282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.914505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.914570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.914763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.914828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.915057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.915121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 
00:26:20.834 [2024-07-12 16:03:17.915389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.915454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.915640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.915724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.915923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.915988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.916212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.916277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.916485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.834 [2024-07-12 16:03:17.916551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.834 qpair failed and we were unable to recover it. 00:26:20.834 [2024-07-12 16:03:17.916775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.835 [2024-07-12 16:03:17.916841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.835 qpair failed and we were unable to recover it. 00:26:20.835 [2024-07-12 16:03:17.917080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.835 [2024-07-12 16:03:17.917145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.835 qpair failed and we were unable to recover it. 00:26:20.835 [2024-07-12 16:03:17.917355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.835 [2024-07-12 16:03:17.917420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.835 qpair failed and we were unable to recover it. 00:26:20.835 [2024-07-12 16:03:17.917629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.835 [2024-07-12 16:03:17.917694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.835 qpair failed and we were unable to recover it. 00:26:20.835 [2024-07-12 16:03:17.917885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.835 [2024-07-12 16:03:17.917950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.835 qpair failed and we were unable to recover it. 
00:26:20.835 [2024-07-12 16:03:17.918143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:20.835 [2024-07-12 16:03:17.918209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420
00:26:20.835 qpair failed and we were unable to recover it.
00:26:20.835 [2024-07-12 16:03:17.918431 - 16:03:17.979812] posix.c:1023:posix_sock_create / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: the same three-line sequence - connect() failed, errno = 111; sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." - repeats for every reconnect attempt in this window (log time 00:26:20.835 through 00:26:20.840).
00:26:20.840 qpair failed and we were unable to recover it.
00:26:20.840 [2024-07-12 16:03:17.979983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.980054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.980293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.980358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.980556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.980620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.980855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.980922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.981118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.981184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.981371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.981436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.981671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.981749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.981950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.982014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.982227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.982292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.982501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.982565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 
00:26:20.840 [2024-07-12 16:03:17.982764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.982829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.983004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.983068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.983260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.983335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.983539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.983604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.983801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.983868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.984098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.984162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.984341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.984405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.984601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.984666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.984858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.984923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 00:26:20.840 [2024-07-12 16:03:17.985156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.840 [2024-07-12 16:03:17.985221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.840 qpair failed and we were unable to recover it. 
00:26:20.841 [2024-07-12 16:03:17.985437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.985501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.985755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.985831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.986126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.986191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.986379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.986452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.986724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.986806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.987011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.987074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.987258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.987322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.987545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.987610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.987831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.987898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.988151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.988216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 
00:26:20.841 [2024-07-12 16:03:17.988455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.988520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.988769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.988835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.989040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.989111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.989369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.989434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.989603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.989667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.989900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.989965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.990226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.990291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.990552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.990616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.990897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.990964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.991199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.991263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 
00:26:20.841 [2024-07-12 16:03:17.991493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.991557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.991791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.991858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.992177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.992249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.992467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.992535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.992767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.992833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.993075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.993139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.993342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.993406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.993573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.993639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.993897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.993963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.994219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.994283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 
00:26:20.841 [2024-07-12 16:03:17.994544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.994609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.994887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.994953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.995146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.995210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.995467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.995532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.995703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.995785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.996009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.996075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.996371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.996436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.996703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.996785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.996989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.997059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 00:26:20.841 [2024-07-12 16:03:17.997315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.997380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.841 qpair failed and we were unable to recover it. 
00:26:20.841 [2024-07-12 16:03:17.997588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.841 [2024-07-12 16:03:17.997652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:17.997920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:17.997995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:17.998186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:17.998248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:17.998473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:17.998537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:17.998783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:17.998850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:17.999065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:17.999130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:17.999337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:17.999402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:17.999759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:17.999825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.000132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.000198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.000397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.000462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 
00:26:20.842 [2024-07-12 16:03:18.000666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.000731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.000930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.000995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.001196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.001261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.001548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.001613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.001844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.001911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.002104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.002170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.002373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.002438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.002647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.002713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.002899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.002964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.003169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.003235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 
00:26:20.842 [2024-07-12 16:03:18.003459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.003524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.003735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.003822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.004020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.004086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.004326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.004390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.004586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.004651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.004833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.004899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.005098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.005163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.005418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.005483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.005663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.005729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.005952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.006017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 
00:26:20.842 [2024-07-12 16:03:18.006275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.006340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.006530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.006594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.006790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.006857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.007037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.007113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.007325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.007390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.007585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.007650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.007861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.007927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.008237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.008313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.008549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.008613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.008776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.008839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 
00:26:20.842 [2024-07-12 16:03:18.009037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.009102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.842 qpair failed and we were unable to recover it. 00:26:20.842 [2024-07-12 16:03:18.009276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.842 [2024-07-12 16:03:18.009362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.009589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.009653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.009873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.009938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.010116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.010182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.010389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.010454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.010710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.010792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.011018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.011082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.011295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.011359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.011582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.011647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 
00:26:20.843 [2024-07-12 16:03:18.011839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.011905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.012098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.012162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.012399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.012465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.012671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.012735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.012924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.012988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.013215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.013280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.013654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.013720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.013935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.014001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.014200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.014265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.014444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.014508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 
00:26:20.843 [2024-07-12 16:03:18.014758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.014825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.015021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.015086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.015286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.015351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.015585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.015650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.015847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.015913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.016135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.016199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.016385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.016449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.016660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.016724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.016926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.016992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 00:26:20.843 [2024-07-12 16:03:18.017194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.017259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it. 
00:26:20.843 [2024-07-12 16:03:18.017549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.843 [2024-07-12 16:03:18.017613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.843 qpair failed and we were unable to recover it.
00:26:20.843 [... the same three-line error pattern - posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." - repeats continuously from 2024-07-12 16:03:18.017549 through 16:03:18.079897; intermediate repetitions omitted here for readability ...]
00:26:20.849 [2024-07-12 16:03:18.079830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.079897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it.
00:26:20.849 [2024-07-12 16:03:18.080147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.080213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.080530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.080602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.080911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.080977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.081172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.081244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.081571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.081636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.081853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.081920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.082237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.082313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.082547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.082612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.082823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.082889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.083181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.083246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 
00:26:20.849 [2024-07-12 16:03:18.083492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.083558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.083788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.083855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.084036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.084111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.084404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.084469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.084654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.084720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.084956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.085022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.085223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.085288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.085542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.085607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.085860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.085926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.086097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.086173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 
00:26:20.849 [2024-07-12 16:03:18.086428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.086492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.086848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.086915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.087144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.087216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.087447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.087511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.087700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.087778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.087954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.088019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.088192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.088258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.088470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.088540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.088865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.088939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.089225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.089290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 
00:26:20.849 [2024-07-12 16:03:18.089659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.089735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.089992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.090057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.849 [2024-07-12 16:03:18.090373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.849 [2024-07-12 16:03:18.090448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.849 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.090651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.090717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.090952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.091017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.091251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.091317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.091527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.091604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.091789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.091855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.092062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.092127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.092430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.092504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 
00:26:20.850 [2024-07-12 16:03:18.092768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.092835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.093076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.093142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.093425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.093490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.093734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.093821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.094056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.094120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.094342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.094407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.094660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.094725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.094943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.095008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.095194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.095259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.095523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.095593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 
00:26:20.850 [2024-07-12 16:03:18.095859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.095926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.096256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.096322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.096543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.096615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.096843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.096908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.097276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.097341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.097658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.097724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.097915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.097980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.098279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.098344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.098529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.098591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.098852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.098919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 
00:26:20.850 [2024-07-12 16:03:18.099159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.099225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.099468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.099533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.099808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.099875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.100081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.100146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.100399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.100464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.100650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.100717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:20.850 [2024-07-12 16:03:18.100956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:20.850 [2024-07-12 16:03:18.101021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:20.850 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.101278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.101343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.101607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.101681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.101904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.101970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 
00:26:21.128 [2024-07-12 16:03:18.102209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.102274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.102456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.102521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.102749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.102816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.103039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.103113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.103346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.103412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.103699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.103782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.104035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.104100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.104381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.104457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.104692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.104809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.105008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.105073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 
00:26:21.128 [2024-07-12 16:03:18.105313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.105389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.105593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.105658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.105918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.105988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.106258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.106324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.106676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.106777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.107109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.107174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.107391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.107456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.107723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.107804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.108031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.108097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.108303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.108371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 
00:26:21.128 [2024-07-12 16:03:18.108693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.108794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.109078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.109144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.109355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.109419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.109637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.109701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.109910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.109974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.110196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.110261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.110464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.110528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.110771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.110837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.111122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.111187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.111397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.111462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 
00:26:21.128 [2024-07-12 16:03:18.111709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.111791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.112001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.112066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.128 [2024-07-12 16:03:18.112302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.128 [2024-07-12 16:03:18.112367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.128 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.112712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.112801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.113057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.113121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.113381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.113446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.113634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.113699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.113930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.113995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.114365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.114442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.114759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.114824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 
00:26:21.129 [2024-07-12 16:03:18.115145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.115217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.115418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.115483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.115685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.115767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.115989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.116055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.116335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.116401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.116598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.116664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.116877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.116945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.117146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.117211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.117469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.117535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.117803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.117870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 
00:26:21.129 [2024-07-12 16:03:18.118083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.118148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.118373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.118438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.118686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.118764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.118997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.119062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.119279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.119344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.119528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.119593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.119816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.119851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.119955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.119989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.120114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.120149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.120371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.120405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 
00:26:21.129 [2024-07-12 16:03:18.120531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.120566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.120794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.120847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.120958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.120992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.121171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.121206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.121344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.121389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.121496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.121522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.121685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.121767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.121938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.121963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.122102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.122179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.122393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.122458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 
00:26:21.129 [2024-07-12 16:03:18.122648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.122683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.122833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.122870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.123012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.129 [2024-07-12 16:03:18.123091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.129 qpair failed and we were unable to recover it. 00:26:21.129 [2024-07-12 16:03:18.123309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.123345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.123481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.123547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.123715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.123797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.124013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.124078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.124308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.124374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.124601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.124675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.124892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.124958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 
00:26:21.130 [2024-07-12 16:03:18.125154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.125220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.125441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.125506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.125735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.125819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.126000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.126065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.126289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.126355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.126679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.126760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.126968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.127033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.127254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.127319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.127531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.127596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.127777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.127844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 
00:26:21.130 [2024-07-12 16:03:18.128082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.128146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.128370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.128440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.128690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.128768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.128947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.129012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.129195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.129268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.129474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.129539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.129712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.129790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.129997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.130062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.130295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.130360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.130620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.130684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 
00:26:21.130 [2024-07-12 16:03:18.130895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.130961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.131286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.131353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.131566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.131631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.131847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.131913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.132131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.132195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.132430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.132496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.132704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.132804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.132987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.133055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.133272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.133337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.133557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.133621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 
00:26:21.130 [2024-07-12 16:03:18.133873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.133939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.134160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.134229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.134415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.134479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.130 [2024-07-12 16:03:18.134674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.130 [2024-07-12 16:03:18.134755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.130 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.134938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.135004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.135257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.135321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.135633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.135698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.135929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.135995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.136241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.136315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.136541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.136606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 
00:26:21.131 [2024-07-12 16:03:18.136815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.136881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.137154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.137219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.137476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.137541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.137731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.137810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.137984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.138051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.138274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.138339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.138579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.138643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.138861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.138926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.139143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.139209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.139440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.139505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 
00:26:21.131 [2024-07-12 16:03:18.139670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.139734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.139978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.140055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.140286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.140352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.140585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.140655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.140873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.140939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.141119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.141189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.141392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.141464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.141849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.141917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.142195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.142260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.142626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.142698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 
00:26:21.131 [2024-07-12 16:03:18.142901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.142966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.143188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.143253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.143479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.143544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.143812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.143878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.144085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.144150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.144395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.144460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.144655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.144720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.144936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.145001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.145192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.145267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.145490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.145562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 
00:26:21.131 [2024-07-12 16:03:18.145792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.145859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.146044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.146110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.146301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.146367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.131 [2024-07-12 16:03:18.146594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.131 [2024-07-12 16:03:18.146657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.131 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.146871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.146937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.147183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.147249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.147472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.147538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.147829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.147896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.148216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.148291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.148546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.148612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 
00:26:21.132 [2024-07-12 16:03:18.148835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.148901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.149134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.149199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.149427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.149492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.149716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.149810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.149988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.150053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.150303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.150367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.150609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.150673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.150865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.150931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.151172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.151237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.151513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.151578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 
00:26:21.132 [2024-07-12 16:03:18.151822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.151890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.152100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.152165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.152370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.152435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.152678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.152766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.153001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.153066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.153298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.153362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.153613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.153679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.153868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.153933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.154213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.154278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.154485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.154551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 
00:26:21.132 [2024-07-12 16:03:18.154766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.154832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.155049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.155114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.155305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.155370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.155579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.155644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.155894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.155962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.156229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.156294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.156496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.156568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.132 qpair failed and we were unable to recover it. 00:26:21.132 [2024-07-12 16:03:18.156776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.132 [2024-07-12 16:03:18.156843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.157066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.157131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.157361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.157426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 
00:26:21.133 [2024-07-12 16:03:18.157755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.157820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.158018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.158084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.158312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.158376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.158687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.158774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.158952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.159016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.159219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.159288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.159531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.159595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.159849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.159915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.160087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.160161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.160503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.160567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 
00:26:21.133 [2024-07-12 16:03:18.160781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.160847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.161055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.161118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.161296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.161369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.161623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.161687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.161951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.162016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.162222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.162289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.162547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.162611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.162829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.162895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.163116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.163181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.163432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.163497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 
00:26:21.133 [2024-07-12 16:03:18.163761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.163827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.164016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.164080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.164275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.164340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.164572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.164637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.165010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.165080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.165295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.165359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.165615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.165680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.165943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.166009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.166237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.166302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.166528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.166592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 
00:26:21.133 [2024-07-12 16:03:18.166816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.166883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.167173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.167238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.167463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.167528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.167777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.167844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.168130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.168196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.168480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.168545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.168772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.168839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.169046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.133 [2024-07-12 16:03:18.169116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.133 qpair failed and we were unable to recover it. 00:26:21.133 [2024-07-12 16:03:18.169485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.134 [2024-07-12 16:03:18.169554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.134 qpair failed and we were unable to recover it. 00:26:21.134 [2024-07-12 16:03:18.169799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.134 [2024-07-12 16:03:18.169865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.134 qpair failed and we were unable to recover it. 
00:26:21.134 [2024-07-12 16:03:18.170115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:21.134 [2024-07-12 16:03:18.170180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 
00:26:21.134 qpair failed and we were unable to recover it. 
00:26:21.134 [... the same pair of errors (connect() refused with errno = 111 from posix_sock_create, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7fec4c000b90, addr=10.0.0.2, port=4420) repeats for every retry from 16:03:18.170 through 16:03:18.230; each attempt ends with "qpair failed and we were unable to recover it." ...] 
00:26:21.139 [2024-07-12 16:03:18.230699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:26:21.139 [2024-07-12 16:03:18.230786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 
00:26:21.139 qpair failed and we were unable to recover it. 
00:26:21.139 [2024-07-12 16:03:18.230891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.230925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.231065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.231099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.231247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.231303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.231482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.231537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.231716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.231795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.231910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.231944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.232137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.232203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.232487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.232544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.232714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.232793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.232911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.232944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 
00:26:21.139 [2024-07-12 16:03:18.233077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.233110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.233303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.233363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.233619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.233682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.233904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.233938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.234062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.234095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.234249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.234303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.234503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.234558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.234713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.234793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.234931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.234965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.235078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.235115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 
00:26:21.139 [2024-07-12 16:03:18.235232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.235295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.235448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.235503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.235752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.235817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.235958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.235993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.236159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.236185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.236310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.236340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.236530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.139 [2024-07-12 16:03:18.236585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.139 qpair failed and we were unable to recover it. 00:26:21.139 [2024-07-12 16:03:18.236775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.236810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.236924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.236958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.237103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.237157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 
00:26:21.140 [2024-07-12 16:03:18.237288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.237343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.237490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.237544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.237771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.237815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.237959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.237993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.238160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.238222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.238353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.238408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.238623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.238677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.238851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.238886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.239000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.239033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.239201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.239264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 
00:26:21.140 [2024-07-12 16:03:18.239483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.239538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.239701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.239803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.239950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.239986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.240175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.240230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.240380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.240444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.240650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.240704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.240874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.240908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.241021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.241055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.241260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.241314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.241560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.241615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 
00:26:21.140 [2024-07-12 16:03:18.241817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.241852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.241986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.242019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.242244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.242299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.242500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.242555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.242702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.242767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.242901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.242934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.243096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.243129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.243350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.243383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.243538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.243592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.243791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.243854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 
00:26:21.140 [2024-07-12 16:03:18.243973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.244006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.244145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.244203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.244432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.140 [2024-07-12 16:03:18.244487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.140 qpair failed and we were unable to recover it. 00:26:21.140 [2024-07-12 16:03:18.244671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.244726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.244901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.244935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.245095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.245134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.245304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.245338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.245466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.245521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.245722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.245799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.245914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.245947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 
00:26:21.141 [2024-07-12 16:03:18.246113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.246170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.246329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.246388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.246569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.246624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.246813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.246848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.246957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.246990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.247166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.247221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.247429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.247484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.247656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.247711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.247884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.247918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.248054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.248089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 
00:26:21.141 [2024-07-12 16:03:18.248260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.248310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.248490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.248545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.248731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.248807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.248941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.141 [2024-07-12 16:03:18.248974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.141 qpair failed and we were unable to recover it. 00:26:21.141 [2024-07-12 16:03:18.249129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.249184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.249425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.249480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.249685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.249750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.249888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.249921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.250064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.250097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.250322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.250377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 
00:26:21.142 [2024-07-12 16:03:18.250557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.250613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.250830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.250864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.250982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.251016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.251241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.251295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.251507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.251562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.251767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.251818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.251950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.251984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.252144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.252198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.252384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.252439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.252588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.252643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 
00:26:21.142 [2024-07-12 16:03:18.252823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.252857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.252986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.253043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.253212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.253266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.253421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.253475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.253666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.253719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.253876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.253915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.254046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.254098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.254238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.254290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.254468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.254520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.254717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.254798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 
00:26:21.142 [2024-07-12 16:03:18.254933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.254967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.142 qpair failed and we were unable to recover it. 00:26:21.142 [2024-07-12 16:03:18.255148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.142 [2024-07-12 16:03:18.255200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.255345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.255396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.255541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.255593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.255832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.255867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.255985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.256018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.256177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.256232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.256381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.256436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.256641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.256696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.256876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.256910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 
00:26:21.143 [2024-07-12 16:03:18.257017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.257072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.257252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.257309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.257480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.257531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.257701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.257764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.257889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.257922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.258066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.258117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.258288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.258346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.258530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.258581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.258790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.258823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.258959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.258993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 
00:26:21.143 [2024-07-12 16:03:18.259131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.259182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.259384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.259435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.259595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.259657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.259837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.259871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.259985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.260018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.260190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.260241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.260413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.260464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.260661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.260714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.260896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.143 [2024-07-12 16:03:18.260930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.143 qpair failed and we were unable to recover it. 00:26:21.143 [2024-07-12 16:03:18.261076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.144 [2024-07-12 16:03:18.261127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.144 qpair failed and we were unable to recover it. 
00:26:21.151 [2024-07-12 16:03:18.298801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.298845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.298973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.299015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.299222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.299267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.299426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.299470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.299620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.299662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.299794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.299837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.300001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.300043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.300190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.300233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.300370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.300413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.300572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.300615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 
00:26:21.151 [2024-07-12 16:03:18.300772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.300815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.300944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.300987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.301164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.301207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.301364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.301406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.301560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.301604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.301765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.301809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.301934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.301977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.302125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.302168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.302294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.302337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.302527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.302570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 
00:26:21.151 [2024-07-12 16:03:18.302725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.302799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.302926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.302969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.303126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.151 [2024-07-12 16:03:18.303168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.151 qpair failed and we were unable to recover it. 00:26:21.151 [2024-07-12 16:03:18.303370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.303413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.303569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.303613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.303767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.303811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.303928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.303971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.304151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.304195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.304351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.304394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.304527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.304569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 
00:26:21.152 [2024-07-12 16:03:18.304772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.304815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.304941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.304984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.305137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.305187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.305368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.305411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.305528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.305571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.305779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.305824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.305974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.306017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.306186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.306229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.306421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.306464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.306598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.306641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 
00:26:21.152 [2024-07-12 16:03:18.306794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.306838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.306961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.307004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.307170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.307213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.307382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.307425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.307630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.307673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.307806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.307853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.308004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.308048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.308206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.308250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.308451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.308493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.308685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.308729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 
00:26:21.152 [2024-07-12 16:03:18.308893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.308936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.309102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.309162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.152 [2024-07-12 16:03:18.309342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.152 [2024-07-12 16:03:18.309400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.152 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.309621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.309664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.309810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.309853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.309998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.310050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.310278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.310339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.310469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.310512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.310707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.310762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.310906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.310949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 
00:26:21.153 [2024-07-12 16:03:18.311084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.311147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.311348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.311407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.311575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.311618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.311786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.311832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.311970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.312013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.312220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.312264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.312419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.312462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.312653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.312696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.312856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.312900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.313084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.313144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 
00:26:21.153 [2024-07-12 16:03:18.313337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.313381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.313506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.313549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.313682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.313731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.313911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.313954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.314115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.314157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.314277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.314321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.153 qpair failed and we were unable to recover it. 00:26:21.153 [2024-07-12 16:03:18.314474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.153 [2024-07-12 16:03:18.314517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.314691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.314733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.314889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.314932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.315133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.315184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 
00:26:21.154 [2024-07-12 16:03:18.315356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.315400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.315521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.315564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.315750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.315796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.315936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.315979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.316144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.316186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.316354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.316397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.316539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.316582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.316759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.316806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.316951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.316994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.317147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.317190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 
00:26:21.154 [2024-07-12 16:03:18.317327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.317370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.317533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.317576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.317708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.317761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.317907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.317950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.318138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.318189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.318351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.318393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.318584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.318629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.318785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.318829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.318961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.319005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.319217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.319260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 
00:26:21.154 [2024-07-12 16:03:18.319383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.319427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.319649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.319692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.319900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.319943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.320113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.320184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.154 [2024-07-12 16:03:18.320345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.154 [2024-07-12 16:03:18.320388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.154 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.320546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.320590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.320797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.320840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.320974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.321017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.321162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.321205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.321411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.321454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 
00:26:21.155 [2024-07-12 16:03:18.321591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.321634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.321797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.321841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.321987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.322041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.322177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.322219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.322393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.322436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.322639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.322682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.322871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.322916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.323068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.323111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.323279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.323322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.323486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.323538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 
00:26:21.155 [2024-07-12 16:03:18.323756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.323801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.323947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.323991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.324120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.324162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.324332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.324376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.324569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.324613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.324795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.324840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.324986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.325030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.325222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.325265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.325463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.325506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.325704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.325757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 
00:26:21.155 [2024-07-12 16:03:18.325923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.325983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.326130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.326191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.326369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.326412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.326585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.326627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.326771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.326816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.326967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.327030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.327177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.327220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.327412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.327455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.327655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.327699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 00:26:21.155 [2024-07-12 16:03:18.327872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.155 [2024-07-12 16:03:18.327915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.155 qpair failed and we were unable to recover it. 
(The same three-line sequence, connect() failed with errno = 111 in posix.c:1023:posix_sock_create, followed by the nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock error for tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.", repeats for every subsequent reconnect attempt through 2024-07-12 16:03:18.375169, console time 00:26:21.161.)
00:26:21.161 [2024-07-12 16:03:18.375368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.375429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.375582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.375625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.375787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.375831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.376027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.376069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.376276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.376336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.376508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.376550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.376756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.376800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.377061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.377120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.377374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.377435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.377652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.377694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 
00:26:21.161 [2024-07-12 16:03:18.377880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.377942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.378143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.378203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.378358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.378426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.378587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.378638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.378813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.378846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.378999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.379033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.379214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.379247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.379366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.379399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.379540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.379574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.379680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.379715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 
00:26:21.161 [2024-07-12 16:03:18.379913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.379947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.380106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.380149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.380367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.380409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.380560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.380602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.380812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.380847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.380990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.381023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.381186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.381229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.381353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.381395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.381578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.381620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.381817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.381851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 
00:26:21.161 [2024-07-12 16:03:18.382003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.382054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.382258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.382292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.382448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.382491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.382622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.382664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.382858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.382892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.383003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.383053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.383211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.383254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.383437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.383479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.161 qpair failed and we were unable to recover it. 00:26:21.161 [2024-07-12 16:03:18.383676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.161 [2024-07-12 16:03:18.383719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.383878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.383912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 
00:26:21.162 [2024-07-12 16:03:18.384060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.384093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.384251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.384294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.384486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.384529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.384712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.384755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.384929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.384963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.385118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.385184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.385354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.385425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.385617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.385659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.385858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.385893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.386007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.386040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 
00:26:21.162 [2024-07-12 16:03:18.386237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.386303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.386497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.386539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.386701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.386747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.386867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.386900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.387062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.387129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.387345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.387378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.387577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.387620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.387796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.387830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.387950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.387983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.388160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.388203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 
00:26:21.162 [2024-07-12 16:03:18.388435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.388477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.388732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.388774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.388961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.389022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.389188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.389249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.389412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.389473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.389639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.389680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.389896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.389958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.390147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.390190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.390348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.390390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.390555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.390597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 
00:26:21.162 [2024-07-12 16:03:18.390791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.390835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.390999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.391041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.391208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.391250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.162 [2024-07-12 16:03:18.391447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.162 [2024-07-12 16:03:18.391489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.162 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.391678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.391731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.391935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.391977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.392143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.392185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.392352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.392394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.392561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.392604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.392803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.392847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 
00:26:21.163 [2024-07-12 16:03:18.393058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.393100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.393237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.393279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.393529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.393571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.393755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.393799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.393996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.394038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.394241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.394303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.394577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.394619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.394809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.394873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.395069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.395130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.395383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.395444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 
00:26:21.163 [2024-07-12 16:03:18.395651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.395693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.395876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.395939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.396154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.396220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.396440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.396500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.396759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.396803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.397046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.397072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.397331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.397392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.397620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.397663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.397838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.397882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.398104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.398165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 
00:26:21.163 [2024-07-12 16:03:18.398365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.398428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.398623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.398666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.398883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.398947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.399152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.399222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.399505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.399568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.399789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.399863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.400116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.400183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.400444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.400504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.400682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.400729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.400983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.401049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 
00:26:21.163 [2024-07-12 16:03:18.401280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.401354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.401549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.401591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.401782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.401826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.402013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.163 [2024-07-12 16:03:18.402075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.163 qpair failed and we were unable to recover it. 00:26:21.163 [2024-07-12 16:03:18.402243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.164 [2024-07-12 16:03:18.402312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.164 qpair failed and we were unable to recover it. 00:26:21.164 [2024-07-12 16:03:18.402513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.164 [2024-07-12 16:03:18.402563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.164 qpair failed and we were unable to recover it. 00:26:21.164 [2024-07-12 16:03:18.402763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.164 [2024-07-12 16:03:18.402831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.164 qpair failed and we were unable to recover it. 00:26:21.164 [2024-07-12 16:03:18.402995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.164 [2024-07-12 16:03:18.403066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.164 qpair failed and we were unable to recover it. 00:26:21.164 [2024-07-12 16:03:18.403269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.164 [2024-07-12 16:03:18.403313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.164 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.403527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.403570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 
00:26:21.440 [2024-07-12 16:03:18.403806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.403850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.404053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.404086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.404227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.404260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.404394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.404428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.404546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.404582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.404747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.404791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.404941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.404985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.405181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.405224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.405423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.405470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.405634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.405677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 
00:26:21.440 [2024-07-12 16:03:18.405996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.406048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.406244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.406288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.406498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.406548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.406789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.406856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.407077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.407148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.407363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.407410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.407554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.407598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.407792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.407837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.408016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.408083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 00:26:21.440 [2024-07-12 16:03:18.408244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.440 [2024-07-12 16:03:18.408286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.440 qpair failed and we were unable to recover it. 
00:26:21.446 [2024-07-12 16:03:18.466469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.466531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.466713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.466767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.466993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.467073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.467338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.467401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.467608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.467651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.467900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.467965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.468247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.468311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.468516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.468579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.468813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.468881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.469136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.469199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 
00:26:21.446 [2024-07-12 16:03:18.469447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.469508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.469766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.469811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.470080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.470148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.470371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.470432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.470637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.470680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.470922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.470985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.471225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.471292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.471530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.471592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.471804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.471875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.472188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.472269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 
00:26:21.446 [2024-07-12 16:03:18.472541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.472602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.472810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.472880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.473146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.473207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.473437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.473500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.473759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.473803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.474018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.474084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.474363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.474432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.474757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.474801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.475057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.475100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.475426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.475492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 
00:26:21.446 [2024-07-12 16:03:18.475726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.475789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.476025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.476068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.446 [2024-07-12 16:03:18.476340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.446 [2024-07-12 16:03:18.476406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.446 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.476666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.476749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.476955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.476997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.477290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.477362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.477546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.477612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.477881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.477925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.478198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.478259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.478476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.478538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 
00:26:21.447 [2024-07-12 16:03:18.478790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.478834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.479010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.479080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.479327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.479386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.479561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.479604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.479845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.479908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.480186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.480249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.480528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.480591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.480861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.480923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.481095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.481159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.481341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.481383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 
00:26:21.447 [2024-07-12 16:03:18.481588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.481631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.481899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.481965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.482201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.482262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.482496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.482557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.482827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.482921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.483137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.483198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.483464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.483527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.483774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.483818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.483988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.484056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.484280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.484349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 
00:26:21.447 [2024-07-12 16:03:18.484536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.484598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.484813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.484879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.485161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.485223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.485456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.485519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.485750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.485793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.486069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.486151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.486330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.486393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.486610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.486652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.486825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.486869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.487140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.487203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 
00:26:21.447 [2024-07-12 16:03:18.487444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.487507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.447 [2024-07-12 16:03:18.487773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.447 [2024-07-12 16:03:18.487817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.447 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.488045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.488106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.488367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.488430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.488607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.488650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.488880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.488943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.489210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.489270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.489507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.489570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.489823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.489888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.490153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.490215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 
00:26:21.448 [2024-07-12 16:03:18.490487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.490550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.490723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.490779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.491044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.491115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.491380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.491446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.491711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.491777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.492051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.492094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.492295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.492356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.492645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.492708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.492992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.493035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.493314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.493376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 
00:26:21.448 [2024-07-12 16:03:18.493630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.493691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.493964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.494007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.494302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.494364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.494644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.494706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.494939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.494982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.495147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.495210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.495426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.495488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.495748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.495792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.495951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.495994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.496225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.496275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 
00:26:21.448 [2024-07-12 16:03:18.496491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.496556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.496799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.496843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.497089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.497152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.448 [2024-07-12 16:03:18.497370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.448 [2024-07-12 16:03:18.497435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.448 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.497686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.497729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.497968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.498031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.498250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.498313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.498567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.498629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.498871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.498942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.499216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.499279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 
00:26:21.449 [2024-07-12 16:03:18.499549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.499612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.499851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.499914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.500136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.500200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.500533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.500594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.500865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.500931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.501214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.501279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.501504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.501565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.501727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.501801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.501961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.502021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.502338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.502404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 
00:26:21.449 [2024-07-12 16:03:18.502666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.502709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.502994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.503068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.503337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.503399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.503568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.503610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.503828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.503894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.504121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.504182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.504418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.504483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.504648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.504691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.504934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.504995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 00:26:21.449 [2024-07-12 16:03:18.505258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.449 [2024-07-12 16:03:18.505320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.449 qpair failed and we were unable to recover it. 
00:26:21.449 [2024-07-12 16:03:18.505558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.449 [2024-07-12 16:03:18.505622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420
00:26:21.449 qpair failed and we were unable to recover it.
00:26:21.449 [2024-07-12 16:03:18.505894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.449 [2024-07-12 16:03:18.505956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420
00:26:21.449 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 16:03:18.505 through 16:03:18.568 ...]
00:26:21.455 [2024-07-12 16:03:18.568868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.455 [2024-07-12 16:03:18.568937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420
00:26:21.455 qpair failed and we were unable to recover it.
00:26:21.455 [2024-07-12 16:03:18.569188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.455 [2024-07-12 16:03:18.569248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.455 qpair failed and we were unable to recover it. 00:26:21.455 [2024-07-12 16:03:18.569473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.455 [2024-07-12 16:03:18.569532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.455 qpair failed and we were unable to recover it. 00:26:21.455 [2024-07-12 16:03:18.569787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.455 [2024-07-12 16:03:18.569831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.455 qpair failed and we were unable to recover it. 00:26:21.455 [2024-07-12 16:03:18.570091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.455 [2024-07-12 16:03:18.570134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.455 qpair failed and we were unable to recover it. 00:26:21.455 [2024-07-12 16:03:18.570317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.455 [2024-07-12 16:03:18.570379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.455 qpair failed and we were unable to recover it. 00:26:21.455 [2024-07-12 16:03:18.570602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.455 [2024-07-12 16:03:18.570645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.455 qpair failed and we were unable to recover it. 00:26:21.455 [2024-07-12 16:03:18.570861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.455 [2024-07-12 16:03:18.570925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.455 qpair failed and we were unable to recover it. 00:26:21.455 [2024-07-12 16:03:18.571201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.455 [2024-07-12 16:03:18.571263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.455 qpair failed and we were unable to recover it. 00:26:21.455 [2024-07-12 16:03:18.571548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.455 [2024-07-12 16:03:18.571610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.455 qpair failed and we were unable to recover it. 00:26:21.455 [2024-07-12 16:03:18.571875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.455 [2024-07-12 16:03:18.571937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.455 qpair failed and we were unable to recover it. 
00:26:21.455 [2024-07-12 16:03:18.572207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.455 [2024-07-12 16:03:18.572269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.455 qpair failed and we were unable to recover it. 00:26:21.455 [2024-07-12 16:03:18.572534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.455 [2024-07-12 16:03:18.572596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.455 qpair failed and we were unable to recover it. 00:26:21.455 [2024-07-12 16:03:18.572866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.455 [2024-07-12 16:03:18.572928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.455 qpair failed and we were unable to recover it. 00:26:21.455 [2024-07-12 16:03:18.573199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.573262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.573518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.573580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.573853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.573915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.574177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.574239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.574509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.574573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.574803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.574873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.575127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.575187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 
00:26:21.456 [2024-07-12 16:03:18.575421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.575481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.575753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.575797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.576059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.576121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.576389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.576450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.576709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.576781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.577054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.577098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.577366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.577434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.577699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.577754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.578008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.578052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.578325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.578385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 
00:26:21.456 [2024-07-12 16:03:18.578655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.578718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.578966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.579010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.579280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.579342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.579600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.579663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.579896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.579940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.580173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.580234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.580465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.580525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.580780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.580824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.581071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.581133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.581419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.581479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 
00:26:21.456 [2024-07-12 16:03:18.581755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.581799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.581973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.582016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.582233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.582294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.582561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.582625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.582842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.582886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.583161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.583223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.583474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.583535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.583753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.583797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.583979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.584022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.584281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.584342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 
00:26:21.456 [2024-07-12 16:03:18.584608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.584671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.584919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.584964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.585230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.585292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.456 qpair failed and we were unable to recover it. 00:26:21.456 [2024-07-12 16:03:18.585578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.456 [2024-07-12 16:03:18.585639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.585911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.585956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.586185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.586248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.586520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.586581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.586825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.586887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.587160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.587223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.587449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.587509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 
00:26:21.457 [2024-07-12 16:03:18.587757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.587801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.588087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.588150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.588380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.588442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.588700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.588758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.588977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.589021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.589302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.589362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.589642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.589711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.589982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.590025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.590233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.590295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.590513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.590574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 
00:26:21.457 [2024-07-12 16:03:18.590846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.590909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.591168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.591231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.591498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.591560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.591831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.591895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.592172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.592235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.592449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.592510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.592766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.592810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.593026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.593089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.593349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.593411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.593629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.593673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 
00:26:21.457 [2024-07-12 16:03:18.593942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.593986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.594235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.594297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.594465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.594527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.594761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.594805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.595074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.595137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.595403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.595464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.595727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.595782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.596043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.596087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.596324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.596386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.596647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.596710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 
00:26:21.457 [2024-07-12 16:03:18.596992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.597036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.597270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.597332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.597565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.597628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.597895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.457 [2024-07-12 16:03:18.597939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.457 qpair failed and we were unable to recover it. 00:26:21.457 [2024-07-12 16:03:18.598216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.598279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.598556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.598618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.598876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.598920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.599157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.599218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.599491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.599553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.599772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.599816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 
00:26:21.458 [2024-07-12 16:03:18.600078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.600140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.600414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.600477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.600700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.600752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.601017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.601059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.601310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.601370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.601639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.601703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.602027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.602077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.602309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.602369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.602636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.602698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.602969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.603012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 
00:26:21.458 [2024-07-12 16:03:18.603282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.603344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.603611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.603673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.603902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.603945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.604213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.604274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.604550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.604612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.604859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.604922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.605132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.605194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.605471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.605534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.605786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.605829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.606091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.606154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 
00:26:21.458 [2024-07-12 16:03:18.606434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.606494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.606706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.606758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.607016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.607059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.607329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.607392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.607648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.607710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.607940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.607983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.608225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.608286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.608548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.458 [2024-07-12 16:03:18.608608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.458 qpair failed and we were unable to recover it. 00:26:21.458 [2024-07-12 16:03:18.608831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.459 [2024-07-12 16:03:18.608875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.459 qpair failed and we were unable to recover it. 00:26:21.459 [2024-07-12 16:03:18.609092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.459 [2024-07-12 16:03:18.609154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.459 qpair failed and we were unable to recover it. 
00:26:21.459 [2024-07-12 16:03:18.609378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.459 [2024-07-12 16:03:18.609440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.459 qpair failed and we were unable to recover it. 00:26:21.459 [2024-07-12 16:03:18.609601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.459 [2024-07-12 16:03:18.609644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.459 qpair failed and we were unable to recover it. 00:26:21.459 [2024-07-12 16:03:18.609873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.459 [2024-07-12 16:03:18.609935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.459 qpair failed and we were unable to recover it. 00:26:21.459 [2024-07-12 16:03:18.610209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.459 [2024-07-12 16:03:18.610271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.459 qpair failed and we were unable to recover it. 00:26:21.459 [2024-07-12 16:03:18.610532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.459 [2024-07-12 16:03:18.610595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.459 qpair failed and we were unable to recover it. 00:26:21.459 [2024-07-12 16:03:18.610850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.459 [2024-07-12 16:03:18.610914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.459 qpair failed and we were unable to recover it. 00:26:21.459 [2024-07-12 16:03:18.611178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.459 [2024-07-12 16:03:18.611240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.459 qpair failed and we were unable to recover it. 00:26:21.459 [2024-07-12 16:03:18.611524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.459 [2024-07-12 16:03:18.611585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.459 qpair failed and we were unable to recover it. 00:26:21.459 [2024-07-12 16:03:18.611848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.459 [2024-07-12 16:03:18.611910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.459 qpair failed and we were unable to recover it. 00:26:21.459 [2024-07-12 16:03:18.612175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.459 [2024-07-12 16:03:18.612238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.459 qpair failed and we were unable to recover it. 
00:26:21.464 [2024-07-12 16:03:18.675108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.675182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.675452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.675516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.675735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.675787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.675989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.676031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.676307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.676368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.676629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.676691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.676861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.676911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.677168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.677230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.677512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.677576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.677831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.677874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 
00:26:21.464 [2024-07-12 16:03:18.678158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.678221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.678499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.678559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.678799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.678843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.679086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.679145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.679350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.679411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.679657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.679700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.679988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.680059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.680329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.680391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.680650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.680693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.680983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.681047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 
00:26:21.464 [2024-07-12 16:03:18.681325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.681387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.681611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.681654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.681869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.681932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.682186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.682249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.682506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.464 [2024-07-12 16:03:18.682576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.464 qpair failed and we were unable to recover it. 00:26:21.464 [2024-07-12 16:03:18.682841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.682906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.683174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.683236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.683521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.683583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.683859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.683920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.684104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.684166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 
00:26:21.465 [2024-07-12 16:03:18.684399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.684460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.684673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.684716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.684998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.685071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.685342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.685404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.685661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.685703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.685956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.686019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.686250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.686312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.686567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.686628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.686852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.686916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.687177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.687249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 
00:26:21.465 [2024-07-12 16:03:18.687464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.687528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.687731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.687796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.688000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.688077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.688340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.688403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.688652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.688694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.688953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.688996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.689229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.689300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.689569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.689631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.689891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.689935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.690157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.690219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 
00:26:21.465 [2024-07-12 16:03:18.690482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.690544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.690805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.690849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.691086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.691158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.691424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.691497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.691758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.691802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.692019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.692061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.692311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.692373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.692640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.692703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.692974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.693017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.693282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.693344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 
00:26:21.465 [2024-07-12 16:03:18.693621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.693684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.693921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.693965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.694172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.694240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.694521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.694582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.694817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.694888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.465 [2024-07-12 16:03:18.695128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.465 [2024-07-12 16:03:18.695191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.465 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.695460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.695523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.695783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.695827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.696018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.696082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.696347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.696409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 
00:26:21.466 [2024-07-12 16:03:18.696628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.696671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.696936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.696979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.697249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.697310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.697554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.697616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.697845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.697911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.698181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.698244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.698510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.698573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.698831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.698897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.699166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.699229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.699449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.699512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 
00:26:21.466 [2024-07-12 16:03:18.699724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.699776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.700043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.700106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.700374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.700436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.700694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.700746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.701026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.701069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.701340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.701404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.701662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.701711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.701998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.702040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.702306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.702376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.702671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.702713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 
00:26:21.466 [2024-07-12 16:03:18.702950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.702993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.703271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.703314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.703576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.703638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.703902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.703946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.704204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.704268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.704526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.704586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.704813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.704882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.705155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.705217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.705476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.705537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.705820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.705863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 
00:26:21.466 [2024-07-12 16:03:18.706078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.706147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.706416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.706478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.706730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.706783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.707008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.707051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.707322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.707387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.707654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.707716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.466 [2024-07-12 16:03:18.708000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.466 [2024-07-12 16:03:18.708044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.466 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.708315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.708377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.708656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.708719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.708993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.709036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 
00:26:21.467 [2024-07-12 16:03:18.709263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.709325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.709587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.709652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.709920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.709964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.710253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.710313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.710590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.710651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.710914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.710957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.711199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.711262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.711543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.711607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.711874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.711938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.712113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.712176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 
00:26:21.467 [2024-07-12 16:03:18.712452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.712514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.712773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.712816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.713087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.713149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.713432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.713499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.713761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.713805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.714061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.714104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.714306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.714376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.714650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.714715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.715000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.715043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.715285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.715353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 
00:26:21.467 [2024-07-12 16:03:18.715625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.715685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.467 [2024-07-12 16:03:18.715964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.467 [2024-07-12 16:03:18.716008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.467 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.716273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.716334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.716614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.716675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.716945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.716988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.717256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.717316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.717597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.717659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.717853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.717897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.718173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.718240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.718510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.718572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 
00:26:21.745 [2024-07-12 16:03:18.718819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.718884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.719161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.719222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.719520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.719566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.719843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.719906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.720183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.720247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.720471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.720536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.720806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.720850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.721127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.721188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.721479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.721525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 00:26:21.745 [2024-07-12 16:03:18.721755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.745 [2024-07-12 16:03:18.721800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.745 qpair failed and we were unable to recover it. 
00:26:21.745 [2024-07-12 16:03:18.722026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.722076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.722347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.722408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.722684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.722728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.722974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.723018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.723258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.723322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.723594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.723657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.723916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.723961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.724241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.724303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.724580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.724651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.724927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.724972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 
00:26:21.746 [2024-07-12 16:03:18.725206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.725273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.725502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.725569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.725805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.725877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.726155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.726218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.726453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.726521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.726734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.726789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.727000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.727078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.727315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.727376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.727543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.727590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.727823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.727893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 
00:26:21.746 [2024-07-12 16:03:18.728175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.728237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.728517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.728581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.728823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.728891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.729175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.729238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.729513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.729582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.729857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.729928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.730161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.730228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.730503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.730570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.730860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.730926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.731203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.731268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 
00:26:21.746 [2024-07-12 16:03:18.731560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.731627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.731908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.731975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.732244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.732305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.732573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.732623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.732860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.732926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.733209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.733279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.733549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.733611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.733890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.733958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.734241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.734308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 00:26:21.746 [2024-07-12 16:03:18.734529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.746 [2024-07-12 16:03:18.734595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.746 qpair failed and we were unable to recover it. 
00:26:21.746 [2024-07-12 16:03:18.734883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.734951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.735249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.735317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.735594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.735637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.735912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.735992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.736273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.736342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.736614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.736657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.736957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.737024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.737307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.737370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.737587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.737636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.737922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.737988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 
00:26:21.747 [2024-07-12 16:03:18.738271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.738338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.738587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.738658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.738946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.739023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.739308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.739375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.739643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.739689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.739987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.740053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.740332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.740402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.740674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.740720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.740988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.741063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.741327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.741393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 
00:26:21.747 [2024-07-12 16:03:18.741672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.741718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.742003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.742070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.742340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.742404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.742624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.742671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.742972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.743046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.743323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.743391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.743657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.743702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.743992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.744055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.744282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.744345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.744611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.744674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 
00:26:21.747 [2024-07-12 16:03:18.744960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.745024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.745314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.745375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.745642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.745704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.746059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.746158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.746450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.746519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.746828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.746909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.747219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.747283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.747550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.747 [2024-07-12 16:03:18.747614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.747 qpair failed and we were unable to recover it. 00:26:21.747 [2024-07-12 16:03:18.747913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.747959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.748252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.748317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 
00:26:21.748 [2024-07-12 16:03:18.748622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.748683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.748989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.749032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.749336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.749400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.749698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.749802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.750087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.750151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.750431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.750494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.750798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.750841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.751118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.751181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.751489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.751552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.751866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.751908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 
00:26:21.748 [2024-07-12 16:03:18.752141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.752204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.752500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.752563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.752858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.752901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.753156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.753197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.753506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.753569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.753858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.753901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.754176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.754239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.754568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.754630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.754946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.754989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.755292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.755355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 
00:26:21.748 [2024-07-12 16:03:18.755661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.755723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.756028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.756070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.756326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.756387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.756693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.756788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.757067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.757130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.757443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.757505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.757800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.757858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.758120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.758183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.758447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.758509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.758732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.758786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 
00:26:21.748 [2024-07-12 16:03:18.759016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.759106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.759398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.759460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.759799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.759842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.760098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.760140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.760391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.760452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.760768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.760827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.761054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.761132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.761445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.761506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.761815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.748 [2024-07-12 16:03:18.761880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.748 qpair failed and we were unable to recover it. 00:26:21.748 [2024-07-12 16:03:18.762177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.762240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 
00:26:21.749 [2024-07-12 16:03:18.762536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.762598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.762898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.762963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.763273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.763335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.763639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.763701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.764011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.764044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.764250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.764282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.764472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.764505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.764690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.764722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.764882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.764914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.765118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.765151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 
00:26:21.749 [2024-07-12 16:03:18.765347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.765379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.765633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.765665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.765903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.765937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.766103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.766135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.766371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.766403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.766653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.766685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.766935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.766968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.767179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.767211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.767454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.767487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.767682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.767715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 
00:26:21.749 [2024-07-12 16:03:18.767917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.767950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.768196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.768228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.768435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.768468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.768660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.768693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.768898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.768930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.769128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.769159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.769294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.769325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.769515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.769547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.769704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.769735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.769980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.770011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 
00:26:21.749 [2024-07-12 16:03:18.770245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.770276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.770449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.770485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.749 [2024-07-12 16:03:18.770718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.749 [2024-07-12 16:03:18.770756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.749 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.770992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.771023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.771257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.771288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.771523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.771555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.771695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.771726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.771934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.771965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.772129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.772159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.772357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.772387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 
00:26:21.750 [2024-07-12 16:03:18.772623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.772653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.772799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.772830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.773030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.773059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.773296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.773326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.773567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.773597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.773834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.773865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.774044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.774121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.774420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.774482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.774803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.774833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.775085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.775146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 
00:26:21.750 [2024-07-12 16:03:18.775402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.775464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.775798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.775829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.776073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.776137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.776429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.776491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.776799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.776830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.777007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.777071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.777378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.777440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.777734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.777823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.778076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.778148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.778401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.778463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 
00:26:21.750 [2024-07-12 16:03:18.778767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.778819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.779086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.779147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.779406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.779468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.779726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.779798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.779997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.780050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.780359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.780422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.780720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.780812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.781075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.781137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.781439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.781502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.781806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.781836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 
00:26:21.750 [2024-07-12 16:03:18.782092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.782154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.782456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.782519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.782770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.782832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.783025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.750 [2024-07-12 16:03:18.783097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.750 qpair failed and we were unable to recover it. 00:26:21.750 [2024-07-12 16:03:18.783404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.783467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.783785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.783816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.784074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.784138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.784456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.784520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.784822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.784852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.785067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.785130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 
00:26:21.751 [2024-07-12 16:03:18.785400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.785464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.785680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.785757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.786021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.786092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.786349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.786411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.786687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.786766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.786967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.787001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.787224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.787286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.787589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.787651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.787958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.787989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.788262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.788324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 
00:26:21.751 [2024-07-12 16:03:18.788622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.788683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.788998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.789029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.789347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.789409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.789661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.789723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.789977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.790008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.790265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.790328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.790589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.790651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.790963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.790994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.791214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.791276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.791587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.791649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 
00:26:21.751 [2024-07-12 16:03:18.791938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.791969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.792169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.792231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.792489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.792551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.792848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.792880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.793084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.793146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.793399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.793461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.793766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.793819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.793996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.794045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.794341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.794403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.794677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.794752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 
00:26:21.751 [2024-07-12 16:03:18.795030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.795102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.795403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.795465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.795726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.795806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.796017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.796088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.751 [2024-07-12 16:03:18.796378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.751 [2024-07-12 16:03:18.796441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.751 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.796702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.796787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.796981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.797011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.797250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.797313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.797612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.797673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.797997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.798046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 
00:26:21.752 [2024-07-12 16:03:18.798343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.798405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.798709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.798811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.799075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.799137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.799441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.799503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.799811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.799842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.800085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.800147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.800450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.800513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.800815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.800879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.801136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.801198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.801457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.801520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 
00:26:21.752 [2024-07-12 16:03:18.801828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.801891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.802191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.802252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.802502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.802563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.802874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.802939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.803239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.803300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.803593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.803656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.804003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.804067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.804369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.804432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.804736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.804816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.805063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.805126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 
00:26:21.752 [2024-07-12 16:03:18.805429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.805492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.805796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.805861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.806159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.806221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.806519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.806581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.806901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.806965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.807271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.807334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.807642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.807704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.807967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.808030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.808325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.808388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.808665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.808727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 
00:26:21.752 [2024-07-12 16:03:18.809014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.809076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.809317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.809379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.809681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.809758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.810059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.810137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.810385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.810447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.810761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.752 [2024-07-12 16:03:18.810824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.752 qpair failed and we were unable to recover it. 00:26:21.752 [2024-07-12 16:03:18.811118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.811181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.811480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.811542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.811839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.811903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.812165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.812227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 
00:26:21.753 [2024-07-12 16:03:18.812455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.812517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.812822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.812886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.813147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.813209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.813505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.813567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.813830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.813893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.814192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.814254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.814557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.814619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.814948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.815012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.815313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.815375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.815630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.815692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 
00:26:21.753 [2024-07-12 16:03:18.816012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.816075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.816366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.816428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.816731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.816812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.817108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.817170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.817484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.817546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.817852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.817916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.818226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.818288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.818587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.818650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.818964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.819027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.819318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.819380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 
00:26:21.753 [2024-07-12 16:03:18.819670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.819754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.820057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.820120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.820395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.820456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.820767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.820830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.821132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.821194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.821494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.821556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.821855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.821919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.822222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.822285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.822579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.822641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.822920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.822983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 
00:26:21.753 [2024-07-12 16:03:18.823245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.823308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.823606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.823668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.823974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.824037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.824305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.824369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.824677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.824753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.825069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.825131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.825434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.753 [2024-07-12 16:03:18.825496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.753 qpair failed and we were unable to recover it. 00:26:21.753 [2024-07-12 16:03:18.825804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.825868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.826116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.826178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.826481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.826543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 
00:26:21.754 [2024-07-12 16:03:18.826806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.826870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.827087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.827150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.827402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.827464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.827691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.827764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.828075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.828137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.828433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.828495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.828800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.828863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.829170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.829241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.829540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.829602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.829915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.829979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 
00:26:21.754 [2024-07-12 16:03:18.830276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.830338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.830633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.830696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.831024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.831087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.831382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.831444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.831708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.831783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.832098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.832161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.832461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.832523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.832833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.832895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.833212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.833274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.833522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.833584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 
00:26:21.754 [2024-07-12 16:03:18.833881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.833944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.834256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.834319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.834611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.834673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.834991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.835054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.835359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.835421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.835733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.835809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.836059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.754 [2024-07-12 16:03:18.836121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.754 qpair failed and we were unable to recover it. 00:26:21.754 [2024-07-12 16:03:18.836379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.836442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.836698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.836773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.837072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.837134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 
00:26:21.755 [2024-07-12 16:03:18.837430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.837492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.837781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.837845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.838142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.838205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.838511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.838573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.838876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.838941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.839256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.839318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.839573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.839634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.839903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.839966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.840254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.840316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.840613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.840674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 
00:26:21.755 [2024-07-12 16:03:18.840984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.841047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.841337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.841399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.841663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.841726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.842010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.842072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.842373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.842434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.842692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.842771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.843029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.843091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.843384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.843446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.843697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.843786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.846961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.847025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 
00:26:21.755 [2024-07-12 16:03:18.847344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.847407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.847664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.847726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.848064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.848126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.848418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.848480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.848782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.848846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.849140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.849202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.849493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.849555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.849831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.849895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.850172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.850235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.850535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.850597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 
00:26:21.755 [2024-07-12 16:03:18.850911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.850975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.851278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.851341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.851646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.851707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.851971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.852034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.852328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.852391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.852663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.852724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.853040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.853103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.853400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.755 [2024-07-12 16:03:18.853462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.755 qpair failed and we were unable to recover it. 00:26:21.755 [2024-07-12 16:03:18.853793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.853857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.854162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.854224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 
00:26:21.756 [2024-07-12 16:03:18.854519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.854582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.854848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.854911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.855165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.855225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.855486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.855548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.855821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.855886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.856189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.856261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.856531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.856595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.856869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.856933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.857195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.857257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.857512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.857574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 
00:26:21.756 [2024-07-12 16:03:18.857883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.857948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.858164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.858227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.858522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.858585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.858847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.858913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.859214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.859277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.859525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.859587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.859901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.859967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.860219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.860282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.860537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.860600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.860861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.860927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 
00:26:21.756 [2024-07-12 16:03:18.861176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.861239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.861545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.861609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.861941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.862006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.862301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.862364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.862673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.862754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.863030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.863094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.863293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.863357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.863667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.863731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.864062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.864125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.864423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.864487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 
00:26:21.756 [2024-07-12 16:03:18.864789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.864855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.865070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.865140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.865434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.865508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.865733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.865819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.866109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.866174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.866466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.866531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.866853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.866919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.867178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.867241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.756 [2024-07-12 16:03:18.867550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.756 [2024-07-12 16:03:18.867614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.756 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.867877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.867941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 
00:26:21.757 [2024-07-12 16:03:18.868236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.868299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.868559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.868622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.868919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.868984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.869291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.869355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.869655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.869718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.869994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.870058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.870374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.870438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.870756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.870821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.871083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.871145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.871397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.871462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 
00:26:21.757 [2024-07-12 16:03:18.871767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.871832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.872131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.872194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.872515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.872579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.872883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.872950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.873248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.873311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.873622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.873685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.874025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.874090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.874384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.874448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.874707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.874788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.875098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.875162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 
00:26:21.757 [2024-07-12 16:03:18.875466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.875530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.875841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.875907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.876212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.876275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.876582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.876646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.876966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.877032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.877335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.877398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.877693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.877784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.878092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.878157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.878456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.878520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.878824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.878889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 
00:26:21.757 [2024-07-12 16:03:18.879196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.879260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.879513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.879577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.879879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.879944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.880241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.880304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.880598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.880661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.880932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.880998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.881303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.881365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.881585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.881648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.881985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.882050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 00:26:21.757 [2024-07-12 16:03:18.882300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.757 [2024-07-12 16:03:18.882363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.757 qpair failed and we were unable to recover it. 
00:26:21.758 [2024-07-12 16:03:18.882664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.882727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.883059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.883123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.883384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.883448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.883760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.883826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.884133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.884197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.884492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.884555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.884862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.884927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.885196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.885260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.885518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.885581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.885855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.885920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 
00:26:21.758 [2024-07-12 16:03:18.886217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.886281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.886534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.886598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.886853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.886918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.887218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.887282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.887576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.887640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.887959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.888023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.888340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.888404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.888708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.888790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.889062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.889126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.889432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.889497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 
00:26:21.758 [2024-07-12 16:03:18.889793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.889868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.890181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.890245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.890548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.890612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.890867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.890934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.891230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.891294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.758 qpair failed and we were unable to recover it. 00:26:21.758 [2024-07-12 16:03:18.891592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.758 [2024-07-12 16:03:18.891655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.891927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.891991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.892299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.892363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.892663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.892726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.893001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.893064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 
00:26:21.759 [2024-07-12 16:03:18.893317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.893381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.893622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.893686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.894013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.894078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.894332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.894395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.894698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.894780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.895002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.895066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.895365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.895429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.895680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.895762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.896066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.896130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.896449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.896512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 
00:26:21.759 [2024-07-12 16:03:18.896772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.896837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.897089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.897153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.897449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.897513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.897815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.897880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.898141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.898205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.898519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.898582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.898879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.898944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.899201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.899274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.899589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.899652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.899963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.900028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 
00:26:21.759 [2024-07-12 16:03:18.900293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.900358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.900658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.900721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.901030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.901094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.901393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.901457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.901772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.901837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.902143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.902207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.902468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.902531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.902797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.902863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.903124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.903188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.903444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.903507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 
00:26:21.759 [2024-07-12 16:03:18.903767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.903832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.904151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.904215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.904468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.904531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.904831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.904896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.905156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.905219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.905476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.905540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.759 qpair failed and we were unable to recover it. 00:26:21.759 [2024-07-12 16:03:18.905801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.759 [2024-07-12 16:03:18.905867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.906173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.906237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.906545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.906609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.906895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.906961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 
00:26:21.760 [2024-07-12 16:03:18.907276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.907340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.907652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.907716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.908044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.908107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.908376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.908439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.908690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.908767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.909049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.909112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.909412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.909475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.909778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.909843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.910140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.910204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.910466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.910529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 
00:26:21.760 [2024-07-12 16:03:18.910833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.910898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.911207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.911271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.911523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.911586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.911841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.911907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.912158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.912222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.912475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.912539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.912850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.912914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.913220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.913283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.913513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.913576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.913881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.913947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 
00:26:21.760 [2024-07-12 16:03:18.914249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.914312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.914569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.914633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.914946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.915012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.915311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.915374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.915674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.915755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.916014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.916078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.916384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.916447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.916760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.916826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.917119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.917183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.917473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.917537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 
00:26:21.760 [2024-07-12 16:03:18.917803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.917869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.918162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.918226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.918497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.918560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.918860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.918925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.919245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.919308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.919613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.919676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.919997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.760 [2024-07-12 16:03:18.920062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.760 qpair failed and we were unable to recover it. 00:26:21.760 [2024-07-12 16:03:18.920373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.920436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.920692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.920766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.921042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.921107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 
00:26:21.761 [2024-07-12 16:03:18.921368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.921430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.921636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.921700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.922020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.922083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.922380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.922444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.922757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.922823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.923136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.923210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.923433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.923496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.923807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.923872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.924171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.924234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.924505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.924569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 
00:26:21.761 [2024-07-12 16:03:18.924868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.924933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.925148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.925211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.925473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.925536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.925789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.925855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.926174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.926239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.926491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.926554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.926816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.926881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.927129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.927192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.927486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.927550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.927860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.927926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 
00:26:21.761 [2024-07-12 16:03:18.928229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.928292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.928547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.928611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.928883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.928948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.929190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.929253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.929477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.929540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.929855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.929920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.930229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.930292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.930588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.930652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.930889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.930954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.931251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.931315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 
00:26:21.761 [2024-07-12 16:03:18.931611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.931675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.931963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.932028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.932335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.932407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.932713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.932796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.933062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.933126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.933375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.933439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.933734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.933824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.934123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.934187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.761 qpair failed and we were unable to recover it. 00:26:21.761 [2024-07-12 16:03:18.934485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.761 [2024-07-12 16:03:18.934549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.934843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.934909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 
00:26:21.762 [2024-07-12 16:03:18.935205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.935269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.935521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.935585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.935884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.935948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.936210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.936274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.936582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.936645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.936914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.936979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.937298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.937362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.937667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.937730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.938043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.938107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.938403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.938466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 
00:26:21.762 [2024-07-12 16:03:18.938771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.938835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.939132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.939196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.939455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.939519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.939778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.939844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.940152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.940215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.940481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.940544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.940842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.940907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.941214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.941278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.941601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.941664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.941999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.942073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 
00:26:21.762 [2024-07-12 16:03:18.942371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.942434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.942766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.942831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.943141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.943204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.943514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.943576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.943881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.943947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.944267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.944331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.944634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.944697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.945018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.945082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.945388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.945452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.945767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.945833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 
00:26:21.762 [2024-07-12 16:03:18.946130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.946193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.946453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.946518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.946787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.946853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.947172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.947235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.947497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.947560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.947860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.947926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.948226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.948289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.948587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.948651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.948966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.949031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 00:26:21.762 [2024-07-12 16:03:18.949351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.762 [2024-07-12 16:03:18.949415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.762 qpair failed and we were unable to recover it. 
00:26:21.763 [2024-07-12 16:03:18.949715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.763 [2024-07-12 16:03:18.949810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420
00:26:21.763 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create connect() failed with errno = 111, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats for every reconnection attempt in the window 16:03:18.949 through 16:03:19.023; only the per-attempt timestamps differ ...]
00:26:21.768 [2024-07-12 16:03:19.023895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:21.768 [2024-07-12 16:03:19.023959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420
00:26:21.768 qpair failed and we were unable to recover it.
00:26:21.768 [2024-07-12 16:03:19.024288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.024352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.024653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.024717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.024988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.025052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.025310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.025374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.025674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.025756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.026051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.026116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.026256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.026287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.026470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.026502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.026665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.026697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.026945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.026981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 
00:26:21.768 [2024-07-12 16:03:19.027187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.027223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.027458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.027494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.027735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.027781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.027981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.028017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.028181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.028217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.028399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.028435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.768 [2024-07-12 16:03:19.028674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.768 [2024-07-12 16:03:19.028710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.768 qpair failed and we were unable to recover it. 00:26:21.769 [2024-07-12 16:03:19.028931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.769 [2024-07-12 16:03:19.028968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.769 qpair failed and we were unable to recover it. 00:26:21.769 [2024-07-12 16:03:19.029175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.769 [2024-07-12 16:03:19.029211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.769 qpair failed and we were unable to recover it. 00:26:21.769 [2024-07-12 16:03:19.029388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.769 [2024-07-12 16:03:19.029424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.769 qpair failed and we were unable to recover it. 
00:26:21.769 [2024-07-12 16:03:19.029619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.769 [2024-07-12 16:03:19.029654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.769 qpair failed and we were unable to recover it. 00:26:21.769 [2024-07-12 16:03:19.029863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.769 [2024-07-12 16:03:19.029900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:21.769 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.030083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.030119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.030333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.030369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.030558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.030593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.030836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.030873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.031067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.031103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.031344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.031380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.031580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.031616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.031856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.031892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 
00:26:22.047 [2024-07-12 16:03:19.032125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.032161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.032307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.032343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.032586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.032622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.032861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.032897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.033133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.033169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.033407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.033442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.033674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.033710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.033975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.034012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.034328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.034391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.034696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.034778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 
00:26:22.047 [2024-07-12 16:03:19.035040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.035118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.035438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.035503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.035812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.035849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.036104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.036168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.036473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.036537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.036836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.036873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.037129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.037193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.037448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.037512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.037768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.037827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 00:26:22.047 [2024-07-12 16:03:19.038002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.047 [2024-07-12 16:03:19.038062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.047 qpair failed and we were unable to recover it. 
00:26:22.048 [2024-07-12 16:03:19.038311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.038375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.038694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.038800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.039072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.039136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.039441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.039505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.039804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.039846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.040072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.040136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.040433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.040496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.040808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.040845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.041105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.041169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.041475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.041539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 
00:26:22.048 [2024-07-12 16:03:19.041808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.041843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.042099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.042163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.042432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.042496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.042801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.042837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.043099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.043162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.043464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.043527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.043826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.043863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.044062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.044127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.044441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.044504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.044791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.044828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 
00:26:22.048 [2024-07-12 16:03:19.045076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.045140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.045433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.045497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.045808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.045844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.045994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.046027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.046349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.046412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.046691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.046796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.047036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.047095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.047388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.047452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.047710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.047796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.048061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.048125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 
00:26:22.048 [2024-07-12 16:03:19.048376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.048440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.048705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.048796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.049110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.049174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.049431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.049495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.049796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.049861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.050169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.050233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.050490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.050554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.050828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.050893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.051198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.048 [2024-07-12 16:03:19.051262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.048 qpair failed and we were unable to recover it. 00:26:22.048 [2024-07-12 16:03:19.051575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.051639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 
00:26:22.049 [2024-07-12 16:03:19.051945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.052009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.052267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.052331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.052600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.052662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.052981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.053046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.053361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.053424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.053694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.053776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.054075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.054139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.054407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.054471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.054669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.054732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.055065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.055130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 
00:26:22.049 [2024-07-12 16:03:19.055424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.055488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.055800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.055867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.056129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.056192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.056497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.056561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.056812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.056877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.057180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.057245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.057545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.057608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.057881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.057946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.058249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.058313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.058591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.058655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 
00:26:22.049 [2024-07-12 16:03:19.058966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.059030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.059337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.059400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.059702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.059784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.060090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.060154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.060445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.060508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.060803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.060870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.061180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.061244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.061544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.061607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.061915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.061980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.062275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.062339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 
00:26:22.049 [2024-07-12 16:03:19.062648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.062711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.063010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.063075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.063399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.063464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.063768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.063833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.064092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.064157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.064408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.064472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.064780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.064846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.065153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.065218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.065477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.065540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 00:26:22.049 [2024-07-12 16:03:19.065801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.065865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.049 qpair failed and we were unable to recover it. 
00:26:22.049 [2024-07-12 16:03:19.066126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.049 [2024-07-12 16:03:19.066189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.050 qpair failed and we were unable to recover it. 00:26:22.050 [2024-07-12 16:03:19.066451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.050 [2024-07-12 16:03:19.066515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.050 qpair failed and we were unable to recover it. 00:26:22.050 [2024-07-12 16:03:19.066776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.050 [2024-07-12 16:03:19.066841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.050 qpair failed and we were unable to recover it. 00:26:22.050 [2024-07-12 16:03:19.067134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.050 [2024-07-12 16:03:19.067198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.050 qpair failed and we were unable to recover it. 00:26:22.050 [2024-07-12 16:03:19.067463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.050 [2024-07-12 16:03:19.067526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.050 qpair failed and we were unable to recover it. 00:26:22.050 [2024-07-12 16:03:19.067821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.050 [2024-07-12 16:03:19.067886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.050 qpair failed and we were unable to recover it. 00:26:22.050 [2024-07-12 16:03:19.068196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.050 [2024-07-12 16:03:19.068260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.050 qpair failed and we were unable to recover it. 00:26:22.050 [2024-07-12 16:03:19.068577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.050 [2024-07-12 16:03:19.068641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.050 qpair failed and we were unable to recover it. 00:26:22.050 [2024-07-12 16:03:19.068956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.050 [2024-07-12 16:03:19.069021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.050 qpair failed and we were unable to recover it. 00:26:22.050 [2024-07-12 16:03:19.069329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.050 [2024-07-12 16:03:19.069392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.050 qpair failed and we were unable to recover it. 
00:26:22.055 [2024-07-12 16:03:19.129212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.129275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.129484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.129548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.129728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.129830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.130056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.130120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.130331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.130395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.130633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.130697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.130896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.130959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.131166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.131231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.131488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.131553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.131783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.131848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 
00:26:22.055 [2024-07-12 16:03:19.132068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.132132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.132354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.132424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.132728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.132815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.133098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.133164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.133398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.133462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.133779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.133845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.134102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.134168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.134476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.134540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.134828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.134892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.135132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.135195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 
00:26:22.055 [2024-07-12 16:03:19.135440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.135503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.135710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.135804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.135970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.136046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.136282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.055 [2024-07-12 16:03:19.136345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.055 qpair failed and we were unable to recover it. 00:26:22.055 [2024-07-12 16:03:19.136597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.056 [2024-07-12 16:03:19.136660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.056 qpair failed and we were unable to recover it. 00:26:22.056 [2024-07-12 16:03:19.136895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.056 [2024-07-12 16:03:19.136959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.056 qpair failed and we were unable to recover it. 00:26:22.056 [2024-07-12 16:03:19.137197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.056 [2024-07-12 16:03:19.137270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.056 qpair failed and we were unable to recover it. 00:26:22.056 [2024-07-12 16:03:19.137522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.056 [2024-07-12 16:03:19.137586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.056 qpair failed and we were unable to recover it. 00:26:22.056 [2024-07-12 16:03:19.137822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.056 [2024-07-12 16:03:19.137888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.056 qpair failed and we were unable to recover it. 00:26:22.056 [2024-07-12 16:03:19.138116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.056 [2024-07-12 16:03:19.138180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.056 qpair failed and we were unable to recover it. 
[... the connect() failed (errno = 111) / sock connection error / qpair failure sequence keeps repeating from 16:03:19.138377 through 16:03:19.144630; interleaved with it, the test script's own output reads: ...]
00:26:22.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 864321 Killed "${NVMF_APP[@]}" "$@"
00:26:22.056 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:26:22.056 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:22.056 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:22.056 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:22.056 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
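Note on the failure pattern above: errno = 111 is ECONNREFUSED. The target application was killed (the "Killed" message from target_disconnect.sh line 36), so nothing is accepting TCP connections on 10.0.0.2:4420 and every connect() attempt from the host side is refused, which the host then reports as "qpair failed and we were unable to recover it." A minimal way to observe the same condition from a shell on the test host (illustration only, not part of the CI run; relies on bash's /dev/tcp support and coreutils timeout):

#!/usr/bin/env bash
# Probe the NVMe/TCP listen address the host keeps trying to reach.
# While nvmf_tgt is down the TCP handshake is refused and connect()
# returns ECONNREFUSED (errno 111), matching the errors logged above.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "listener is up on 10.0.0.2:4420"
else
    echo "connection refused or timed out - consistent with errno = 111"
fi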
[... the connect()/qpair failure sequence continues from 16:03:19.144840 through 16:03:19.148318 while the target is brought back up; interleaved with it, the test script's output reads: ...]
00:26:22.057 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=864874
00:26:22.057 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:22.057 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 864874
00:26:22.057 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 864874 ']'
00:26:22.057 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:22.057 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:22.057 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:22.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:22.057 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:22.057 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
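At this point the script has relaunched the target: nvmf_tgt is started inside the cvl_0_0_ns_spdk network namespace with core mask 0xF0, and waitforlisten blocks until the new process (pid 864874) is up, as the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message indicates. A rough sketch of that kind of wait loop, using the pid, socket path, and max_retries=100 shown in the log (illustration only, not SPDK's actual waitforlisten helper):

#!/usr/bin/env bash
# Poll until the restarted target exposes its RPC UNIX-domain socket,
# bailing out early if the process dies or the wait times out.
pid=864874                      # pid reported by nvmfpid= above
sock=/var/tmp/spdk.sock         # socket named in the wait message
for _ in $(seq 1 100); do       # mirrors max_retries=100 from the log
    kill -0 "$pid" 2>/dev/null || { echo "target process exited" >&2; exit 1; }
    [ -S "$sock" ] && { echo "RPC socket $sock is available"; exit 0; }
    sleep 0.1
done
echo "timed out waiting for $sock" >&2
exit 1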
[... the connect() failed (errno = 111) / sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." sequence keeps repeating from 16:03:19.148492 through 16:03:19.163672 ...]
00:26:22.059 [2024-07-12 16:03:19.163798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.163826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 00:26:22.059 [2024-07-12 16:03:19.163931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.163959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 00:26:22.059 [2024-07-12 16:03:19.164088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.164116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 00:26:22.059 [2024-07-12 16:03:19.164274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.164301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 00:26:22.059 [2024-07-12 16:03:19.164458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.164486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 00:26:22.059 [2024-07-12 16:03:19.164609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.164637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 00:26:22.059 [2024-07-12 16:03:19.164758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.164791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 00:26:22.059 [2024-07-12 16:03:19.164900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.164928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 00:26:22.059 [2024-07-12 16:03:19.165049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.165077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 00:26:22.059 [2024-07-12 16:03:19.165236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.165263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 
00:26:22.059 [2024-07-12 16:03:19.165393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.165421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 00:26:22.059 [2024-07-12 16:03:19.165555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.165583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 00:26:22.059 [2024-07-12 16:03:19.165685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.165713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 00:26:22.059 [2024-07-12 16:03:19.165819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.165847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 00:26:22.059 [2024-07-12 16:03:19.165954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.059 [2024-07-12 16:03:19.165983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.059 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.166139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.166167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.166320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.166348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.166475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.166503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.166632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.166661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.166760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.166788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 
00:26:22.060 [2024-07-12 16:03:19.166924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.166952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.167083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.167111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.167241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.167269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.167402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.167452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.167648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.167698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.167911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.167939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.168042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.168070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.168197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.168226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.168356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.168383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.168512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.168540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 
00:26:22.060 [2024-07-12 16:03:19.168667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.168694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.168847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.168874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.168991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.169017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.169143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.169174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.169337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.169364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.169489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.169515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.169617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.169644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.169771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.169798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.169899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.169926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.170085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.170112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 
00:26:22.060 [2024-07-12 16:03:19.170237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.170264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.170394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.170420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.170547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.170574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.170701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.170728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.170837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.170864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.170965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.170992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.171116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.171143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.171242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.171269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.171395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.171422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 00:26:22.060 [2024-07-12 16:03:19.171546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.060 [2024-07-12 16:03:19.171573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.060 qpair failed and we were unable to recover it. 
00:26:22.061 [2024-07-12 16:03:19.171664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.171691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.171824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.171851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.171972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.171998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.172132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.172159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.172281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.172308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.172433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.172461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.172582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.172609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.172730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.172763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.172909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.172935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.173061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.173088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 
00:26:22.061 [2024-07-12 16:03:19.173214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.173240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.173369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.173396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.173555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.173581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.173694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.173720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.173846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.173874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.173971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.173998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.174147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.174173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.174292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.174318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.174464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.174490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.174618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.174644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 
00:26:22.061 [2024-07-12 16:03:19.174779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.174806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.174894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.174920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.175068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.175094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.175218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.175243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.175338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.175365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.175487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.175513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.175653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.175679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.175807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.175834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.175981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.176007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.176130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.176156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 
00:26:22.061 [2024-07-12 16:03:19.176280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.176306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.176422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.176448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.176581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.176607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.176721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.176754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.176911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.176937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.177088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.177114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.177212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.177238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.177391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.177416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.177508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.177535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.061 [2024-07-12 16:03:19.177631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.177657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 
00:26:22.061 [2024-07-12 16:03:19.177785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.061 [2024-07-12 16:03:19.177811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.061 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.177931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.177957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.178080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.178107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.178238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.178265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.178392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.178418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.178542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.178568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.178692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.178718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.178849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.178875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.179023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.179049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.179201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.179227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 
00:26:22.062 [2024-07-12 16:03:19.179321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.179347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.179472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.179502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.179593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.179619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.179753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.179780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.179908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.179934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.180084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.180110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.180203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.180229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.180356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.180382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.180498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.180524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.180651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.180677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 
00:26:22.062 [2024-07-12 16:03:19.180800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.180827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.180907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.180933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.181058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.181083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.181230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.181256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.181371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.181397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.181519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.181545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.181661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.181687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.181791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.181818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.181916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.181942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.182085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.182112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 
00:26:22.062 [2024-07-12 16:03:19.182253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.182279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.182426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.182451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.182572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.182598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.182714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.182746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.182834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.182860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.182952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.182979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.183094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.183119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.183267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.183293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.183437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.183467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.183614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.183640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 
00:26:22.062 [2024-07-12 16:03:19.183769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.183795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.183928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.062 [2024-07-12 16:03:19.183954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.062 qpair failed and we were unable to recover it. 00:26:22.062 [2024-07-12 16:03:19.184073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.184099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.184224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.184250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.184396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.184422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.184513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.184540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.184696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.184722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.184821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.184847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.184941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.184967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.185079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.185105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 
00:26:22.063 [2024-07-12 16:03:19.185193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.185219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.185340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.185366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.185495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.185521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.185668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.185694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.185819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.185845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.185968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.185994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.186113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.186139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.186254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.186280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.186429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.186455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.186579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.186604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 
00:26:22.063 [2024-07-12 16:03:19.186726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.186760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.186877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.186903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.186993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.187019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.187131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.187157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.187278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.187303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.187388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.187414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.187534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.187560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.187708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.187734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.187865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.187891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.188006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.188033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 
00:26:22.063 [2024-07-12 16:03:19.188178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.188204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.188293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.188319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.188461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.188488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.188610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.188636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.188732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.188767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.188884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.188910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.188999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.189025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.189185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.189211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.189326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.189351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.189486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.189512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 
00:26:22.063 [2024-07-12 16:03:19.189628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.189654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.189782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.189808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.189924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.063 [2024-07-12 16:03:19.189950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.063 qpair failed and we were unable to recover it. 00:26:22.063 [2024-07-12 16:03:19.190111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.190136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.190312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.190335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.190465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.190490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.190604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.190628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.190793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.190821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.190940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.190966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.191086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.191111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 
00:26:22.064 [2024-07-12 16:03:19.191224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.191248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.191386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.191410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.191534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.191558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.191701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.191749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.191842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.191868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.191981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.192006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.192113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.192138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.192316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.192340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.192514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.192538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.192673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.192697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 
00:26:22.064 [2024-07-12 16:03:19.192818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.192845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.192994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.193020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.193156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.193195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.193322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.193347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.193514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.193538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.193673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.193711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.193875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.193905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.194027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.194071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.194202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.194226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.194367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.194391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 
00:26:22.064 [2024-07-12 16:03:19.194515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.194540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.194706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.194752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.194891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.194916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.195009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.195049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.195183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.195207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.195342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.195367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.195481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.064 [2024-07-12 16:03:19.195506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.064 qpair failed and we were unable to recover it. 00:26:22.064 [2024-07-12 16:03:19.195678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.195703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.195808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.195834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.195989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.196015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 
00:26:22.065 [2024-07-12 16:03:19.196192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.196216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.196367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.196406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.196506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.196531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.196640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.196665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.196778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.196804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.196894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.196921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.197039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.197080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.197203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.197203] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:26:22.065 [2024-07-12 16:03:19.197242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.197278] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:22.065 [2024-07-12 16:03:19.197366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.197389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 
00:26:22.065 [2024-07-12 16:03:19.197515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.197538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.197707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.197754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.197872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.197898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.198033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.198058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.198170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.198194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.198347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.198371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.198507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.198531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.198662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.198686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.198844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.198871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.198974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.199000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 
00:26:22.065 [2024-07-12 16:03:19.199114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.199138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.199300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.199324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.199436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.199460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.199617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.199641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.199785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.199812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.199956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.199997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.200097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.200122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.200257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.200281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.200409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.200433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.200542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.200567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 
00:26:22.065 [2024-07-12 16:03:19.200700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.200724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.200869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.200895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.200983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.201009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.201153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.201177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.201341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.201379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.201483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.201507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.201675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.201699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.201856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.065 [2024-07-12 16:03:19.201883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.065 qpair failed and we were unable to recover it. 00:26:22.065 [2024-07-12 16:03:19.201974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.202000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.202157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.202180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 
00:26:22.066 [2024-07-12 16:03:19.202299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.202326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.202494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.202519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.202633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.202657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.202809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.202835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.202955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.202981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.203116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.203140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.203246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.203270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.203406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.203429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.203541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.203565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.203727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.203763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 
00:26:22.066 [2024-07-12 16:03:19.203907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.203933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.204101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.204124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.204262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.204285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.204411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.204435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.204581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.204605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.204733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.204779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.204914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.204940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.205067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.205092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.205231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.205270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.205395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.205419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 
00:26:22.066 [2024-07-12 16:03:19.205546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.205570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.205705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.205729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.205866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.205891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.206004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.206028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.206161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.206186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.206330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.206354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.206517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.206555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.206725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.206776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.206886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.206925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.207066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.207090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 
00:26:22.066 [2024-07-12 16:03:19.207255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.207294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.207424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.207448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.207576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.207600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.207779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.207805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.207932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.207956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.208137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.208161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.208305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.208330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.208475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.208512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.066 qpair failed and we were unable to recover it. 00:26:22.066 [2024-07-12 16:03:19.208648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.066 [2024-07-12 16:03:19.208686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.067 qpair failed and we were unable to recover it. 00:26:22.067 [2024-07-12 16:03:19.208865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.067 [2024-07-12 16:03:19.208891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.067 qpair failed and we were unable to recover it. 
00:26:22.067 [2024-07-12 16:03:19.209009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.067 [2024-07-12 16:03:19.209035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.067 qpair failed and we were unable to recover it. 00:26:22.067 [2024-07-12 16:03:19.209164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.067 [2024-07-12 16:03:19.209191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.067 qpair failed and we were unable to recover it. 00:26:22.067 [2024-07-12 16:03:19.209322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.067 [2024-07-12 16:03:19.209347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.067 qpair failed and we were unable to recover it. 00:26:22.067 [2024-07-12 16:03:19.209498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.067 [2024-07-12 16:03:19.209537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.067 qpair failed and we were unable to recover it. 00:26:22.067 [2024-07-12 16:03:19.209682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.067 [2024-07-12 16:03:19.209707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.067 qpair failed and we were unable to recover it. 00:26:22.067 [2024-07-12 16:03:19.209860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.067 [2024-07-12 16:03:19.209901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.067 qpair failed and we were unable to recover it. 00:26:22.067 [2024-07-12 16:03:19.210034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.067 [2024-07-12 16:03:19.210058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.067 qpair failed and we were unable to recover it. 00:26:22.067 [2024-07-12 16:03:19.210182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.067 [2024-07-12 16:03:19.210220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.067 qpair failed and we were unable to recover it. 00:26:22.067 [2024-07-12 16:03:19.210345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.067 [2024-07-12 16:03:19.210369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.067 qpair failed and we were unable to recover it. 00:26:22.067 [2024-07-12 16:03:19.210492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.067 [2024-07-12 16:03:19.210516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.067 qpair failed and we were unable to recover it. 
00:26:22.067 [2024-07-12 16:03:19.210648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.067 [2024-07-12 16:03:19.210672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420
00:26:22.067 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats back-to-back for every reconnect attempt between 16:03:19.210648 and 16:03:19.248655; only the timestamps differ. The single distinct message in this stretch is the hugepage notice below. ...]
00:26:22.071 [2024-07-12 16:03:19.240578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.071 [2024-07-12 16:03:19.240601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420
00:26:22.071 EAL: No free 2048 kB hugepages reported on node 1
00:26:22.071 qpair failed and we were unable to recover it.
[... identical failures continue through the final attempt ...]
00:26:22.072 [2024-07-12 16:03:19.248631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.072 [2024-07-12 16:03:19.248655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420
00:26:22.072 qpair failed and we were unable to recover it.
00:26:22.072 [2024-07-12 16:03:19.248788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.248815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.248930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.248956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.249145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.249169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.249301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.249324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.249548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.249582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.249723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.249771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.249936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.249960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.250152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.250177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.250325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.250349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.250465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.250500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 
00:26:22.072 [2024-07-12 16:03:19.250667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.250705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.250839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.250871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.251007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.251032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.251171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.251209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.251371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.251394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.251616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.251644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.251792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.251818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.072 qpair failed and we were unable to recover it. 00:26:22.072 [2024-07-12 16:03:19.251915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.072 [2024-07-12 16:03:19.251940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.252061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.252086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.252219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.252257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 
00:26:22.073 [2024-07-12 16:03:19.252398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.252422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.252537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.252570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.252768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.252809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.252966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.252991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.253158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.253182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.253312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.253350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.253505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.253544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.253671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.253696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.253851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.253877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.254138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.254161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 
00:26:22.073 [2024-07-12 16:03:19.254354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.254378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.254525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.254548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.254814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.254841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.254943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.254969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.255196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.255230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.255395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.255418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.255532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.255556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.255768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.255794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.255943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.255971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.256105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.256144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 
00:26:22.073 [2024-07-12 16:03:19.256322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.256345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.256522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.256545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.073 qpair failed and we were unable to recover it. 00:26:22.073 [2024-07-12 16:03:19.256764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.073 [2024-07-12 16:03:19.256789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.256931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.256955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.257078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.257103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.257277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.257301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.257517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.257539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.257688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.257711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.257936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.257961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.258095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.258132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 
00:26:22.074 [2024-07-12 16:03:19.258274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.258314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.258525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.258558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.258725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.258769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.258932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.258958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.259188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.259216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.259348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.259372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.259527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.259565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.259797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.259823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.259966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.259990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.260220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.260253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 
00:26:22.074 [2024-07-12 16:03:19.260390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.260414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.260561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.260599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.260773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.260799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.260884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.260909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.261045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.261069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.261251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.261274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.261450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.261474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.261687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.261710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.261874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.261899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.262118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.262152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 
00:26:22.074 [2024-07-12 16:03:19.262288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.262311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.262473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.262511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.262623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.262663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.262788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.262814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.262940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.262965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.263088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.263131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.263238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.263261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.263407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.263430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.263681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.263704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.263892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.263917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 
00:26:22.074 [2024-07-12 16:03:19.264076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.074 [2024-07-12 16:03:19.264100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.074 qpair failed and we were unable to recover it. 00:26:22.074 [2024-07-12 16:03:19.264231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.264254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.264396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.264420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.264548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.264572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.264715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.264748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.264945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.264981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.265084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.265122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.265251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.265275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.265389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.265413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.265556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.265580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 
00:26:22.075 [2024-07-12 16:03:19.265744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.265769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.265935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.265960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.266145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.266170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.266323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.266346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.266593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.266625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.266764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.266803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.267032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.267064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.267205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.267229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.267384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.267421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.267595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.267624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 
00:26:22.075 [2024-07-12 16:03:19.267765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.267790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.267958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.267982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.268160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.268183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.268399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.268422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.268568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.268592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.268768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.268793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.268943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.268971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.269153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.269176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.269343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.269375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.269604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.269639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 
00:26:22.075 [2024-07-12 16:03:19.269790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.269826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.270064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.270111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.270254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.270277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.270395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.270419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.270596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.270635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.270846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.270883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.271012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.271061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.271364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.271387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.271626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.271659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.271808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.271843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 
00:26:22.075 [2024-07-12 16:03:19.272077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.272112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.075 [2024-07-12 16:03:19.272303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.075 [2024-07-12 16:03:19.272326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.075 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.272501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.272539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.272697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.272720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.272921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.272944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.273084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.273108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.273276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.273314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.273434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.273473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.273650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.273688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.273876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.273901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 
00:26:22.076 [2024-07-12 16:03:19.274046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.274070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.274209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.274233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.274339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.274363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.274503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.274531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.274660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.274684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.274858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.274884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.275046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.275071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.275222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.275246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.275385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.275423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.275576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.275614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 
00:26:22.076 [2024-07-12 16:03:19.275714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.275776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.275934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.275961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.276117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.276156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.276358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.276381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.276553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.276576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.276803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.276836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.277008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.277032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.277275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.277298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.277439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.277463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.277651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.277674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 
00:26:22.076 [2024-07-12 16:03:19.277902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.277928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.278101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.278125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.278329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.278352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.278455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.278493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.278586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.076 [2024-07-12 16:03:19.278610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.076 qpair failed and we were unable to recover it. 00:26:22.076 [2024-07-12 16:03:19.278632] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:22.076 [2024-07-12 16:03:19.278757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.278798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.278966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.278991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.279130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.279178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.279388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.279411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.279563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.279587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 
00:26:22.077 [2024-07-12 16:03:19.279770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.279799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.279897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.279921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.280055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.280079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.280270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.280293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.280478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.280501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.280675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.280698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.280878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.280902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.281084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.281108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.281265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.281289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.281517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.281547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 
00:26:22.077 [2024-07-12 16:03:19.281684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.281707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.281859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.281900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.282098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.282130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.282249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.282273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.282437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.282462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.282636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.282660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.282820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.282859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.282982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.283008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.283139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.283163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.283289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.283313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 
00:26:22.077 [2024-07-12 16:03:19.283486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.283510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.283710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.283764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.283902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.283942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.284090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.284130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.284308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.284332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.284519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.284542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.284708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.284732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.284971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.285010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.285207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.285231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.285376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.285400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 
00:26:22.077 [2024-07-12 16:03:19.285611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.285634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.285887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.285912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.286084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.077 [2024-07-12 16:03:19.286107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.077 qpair failed and we were unable to recover it. 00:26:22.077 [2024-07-12 16:03:19.286273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.286297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.286455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.286490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.286663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.286687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.286852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.286878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.287051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.287076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.287272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.287295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.287445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.287468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 
00:26:22.078 [2024-07-12 16:03:19.287613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.287652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.287810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.287835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.287997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.288022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.288243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.288267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.288385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.288423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.288583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.288621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.288850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.288880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.289019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.289043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.289240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.289273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.289420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.289459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 
00:26:22.078 [2024-07-12 16:03:19.289631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.289654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.289782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.289807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.290015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.290039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.290204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.290239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.290376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.290422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.290659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.290692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.290865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.290891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.291048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.291072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.291193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.291217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.291377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.291401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 
00:26:22.078 [2024-07-12 16:03:19.291572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.291622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.291812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.291857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.291991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.292015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.292215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.292239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.292418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.292442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.292681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.292730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.292932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.292967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.078 [2024-07-12 16:03:19.293153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.078 [2024-07-12 16:03:19.293177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.078 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.293311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.293335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.293527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.293550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 
00:26:22.079 [2024-07-12 16:03:19.293718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.293761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.293880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.293906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.294069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.294107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.294268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.294292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.294408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.294458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.294642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.294665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.294808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.294834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.295042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.295090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.295191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.295214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.295375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.295399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 
00:26:22.079 [2024-07-12 16:03:19.295619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.295642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.295788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.295828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.296004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.296043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.296208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.296241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.296391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.296429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.296636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.296660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.296781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.296805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.297008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.297032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.297183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.297222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.297355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.297379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 
00:26:22.079 [2024-07-12 16:03:19.297526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.297551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.297799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.297835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.297959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.297984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.298170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.298193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.298392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.298423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.298599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.298627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.298770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.298811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.299055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.299079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.079 qpair failed and we were unable to recover it. 00:26:22.079 [2024-07-12 16:03:19.299287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.079 [2024-07-12 16:03:19.299320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.299419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.299457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 
00:26:22.080 [2024-07-12 16:03:19.299636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.299674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.299822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.299853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.300038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.300063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.300236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.300259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.300421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.300446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.300616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.300639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.300819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.300844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.301016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.301054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.301219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.301252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.301440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.301464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 
00:26:22.080 [2024-07-12 16:03:19.301665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.301688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.301816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.301841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.301961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.301987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.302139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.302163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.302345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.302369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.302522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.302547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.302776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.302827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.302981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.303006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.303114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.303138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.303337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.303361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 
00:26:22.080 [2024-07-12 16:03:19.303463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.303487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.303666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.303690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.303876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.303904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.304028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.304052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.304240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.304277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.304424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.304448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.304635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.304659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.304780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.304805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.304942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.304966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.305078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.305102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 
00:26:22.080 [2024-07-12 16:03:19.305240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.305264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.305389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.305413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.305529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.305553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.305704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.305745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.305858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.305894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.306039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.306064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.306214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.306253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.306419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.306443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.306627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.080 [2024-07-12 16:03:19.306650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.080 qpair failed and we were unable to recover it. 00:26:22.080 [2024-07-12 16:03:19.306763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.306788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 
00:26:22.081 [2024-07-12 16:03:19.306945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.306970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.307194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.307218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.307326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.307350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.307502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.307526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.307716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.307760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.307917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.307958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.308118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.308156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.308287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.308327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.308515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.308538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.308667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.308709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 
00:26:22.081 [2024-07-12 16:03:19.308822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.308847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.308958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.308982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.309091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.309115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.309253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.309276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.309394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.309418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.309629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.309653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.309798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.309823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.309912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.309941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.310072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.310096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.310231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.310279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 
00:26:22.081 [2024-07-12 16:03:19.310401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.310439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.310571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.310595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.310762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.310789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.310898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.310923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.311060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.311084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.311240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.311264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.311446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.311481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.311653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.311676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.311769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.311798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.311912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.311935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 
00:26:22.081 [2024-07-12 16:03:19.312036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.312059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.312144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.312166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.312282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.312305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.312388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.312411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.312494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.312518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.312605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.312628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.312820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.312846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.312967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.081 [2024-07-12 16:03:19.312991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.081 qpair failed and we were unable to recover it. 00:26:22.081 [2024-07-12 16:03:19.313264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.313298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.313450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.313474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 
00:26:22.082 [2024-07-12 16:03:19.313605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.313629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.313716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.313761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.313885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.313910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.314025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.314050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.314201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.314226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.314463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.314488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.314596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.314620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.314782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.314818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.314940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.314966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.315178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.315212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 
00:26:22.082 [2024-07-12 16:03:19.315347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.315372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.315515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.315539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.315706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.315751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.315958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.315998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.316190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.316217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.316372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.316409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.316634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.316670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.316818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.316844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.316981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.317007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.317262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.317297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 
00:26:22.082 [2024-07-12 16:03:19.317417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.317442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.317705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.317752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.317891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.317918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.318024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.318049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.318240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.318299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.318502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.318553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.318693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.318735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.318856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.318882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.319033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.082 [2024-07-12 16:03:19.319076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.082 qpair failed and we were unable to recover it. 00:26:22.082 [2024-07-12 16:03:19.319255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.083 [2024-07-12 16:03:19.319290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.083 qpair failed and we were unable to recover it. 
00:26:22.083 [2024-07-12 16:03:19.319393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.083 [2024-07-12 16:03:19.319417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.083 qpair failed and we were unable to recover it. 00:26:22.083 [2024-07-12 16:03:19.319651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.083 [2024-07-12 16:03:19.319684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.083 qpair failed and we were unable to recover it. 00:26:22.083 [2024-07-12 16:03:19.319832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.083 [2024-07-12 16:03:19.319859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.083 qpair failed and we were unable to recover it. 00:26:22.083 [2024-07-12 16:03:19.319992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.083 [2024-07-12 16:03:19.320045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.083 qpair failed and we were unable to recover it. 00:26:22.083 [2024-07-12 16:03:19.320273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.083 [2024-07-12 16:03:19.320307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.083 qpair failed and we were unable to recover it. 00:26:22.083 [2024-07-12 16:03:19.320467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.083 [2024-07-12 16:03:19.320499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.083 qpair failed and we were unable to recover it. 00:26:22.083 [2024-07-12 16:03:19.320676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.083 [2024-07-12 16:03:19.320732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.083 qpair failed and we were unable to recover it. 00:26:22.083 [2024-07-12 16:03:19.320935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.083 [2024-07-12 16:03:19.320970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.083 qpair failed and we were unable to recover it. 00:26:22.083 [2024-07-12 16:03:19.321134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.083 [2024-07-12 16:03:19.321193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.083 qpair failed and we were unable to recover it. 00:26:22.083 [2024-07-12 16:03:19.321384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.083 [2024-07-12 16:03:19.321424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.083 qpair failed and we were unable to recover it. 
00:26:22.083 [2024-07-12 16:03:19.321582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.321631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.321760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.321796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.321985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.322034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.322188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.322222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.322381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.322423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.322566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.322612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.322817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.322845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.322971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.322998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.323131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.323158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.323325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.323352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 
00:26:22.371 [2024-07-12 16:03:19.323470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.323501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.323603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.323630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.323749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.323775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.323909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.323936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.324054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.324079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.324194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.324220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.324315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.324341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.324457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.324483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.324607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.324637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.324781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.324806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 
00:26:22.371 [2024-07-12 16:03:19.324961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.324988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.325133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.325158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.325360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.325399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.325559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.325598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.325760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.325785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.325891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.325923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.326047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.371 [2024-07-12 16:03:19.326072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.371 qpair failed and we were unable to recover it. 00:26:22.371 [2024-07-12 16:03:19.326211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.326251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.326476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.326508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.326643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.326667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 
00:26:22.372 [2024-07-12 16:03:19.326869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.326895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.326998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.327023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.327201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.327242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.327370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.327394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.327561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.327585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.327725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.327760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.327886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.327912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.328054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.328078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.328163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.328186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.328317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.328342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 
00:26:22.372 [2024-07-12 16:03:19.328478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.328503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.328668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.328707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.328848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.328874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.328997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.329023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.329144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.329182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.329312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.329337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.329523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.329562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.329712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.329760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.329884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.329910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.329997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.330033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 
00:26:22.372 [2024-07-12 16:03:19.330205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.330247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.330349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.330387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.330547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.330573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.330676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.330701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.330834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.330861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.330991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.331017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.331169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.331192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.331304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.331329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.331501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.331525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.331685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.331710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 
00:26:22.372 [2024-07-12 16:03:19.331820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.331846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.331933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.331959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.332072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.332101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.332236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.372 [2024-07-12 16:03:19.332275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.372 qpair failed and we were unable to recover it. 00:26:22.372 [2024-07-12 16:03:19.332417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.332456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.332559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.332583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.332729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.332785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.332910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.332936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.333075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.333100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.333211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.333236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 
00:26:22.373 [2024-07-12 16:03:19.333353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.333377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.333504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.333529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.333647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.333673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.333835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.333861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.333979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.334004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.334144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.334183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.334290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.334314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.334483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.334508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.334630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.334668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.334786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.334812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 
00:26:22.373 [2024-07-12 16:03:19.334929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.334956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.335063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.335087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.335215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.335239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.335384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.335423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.335569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.335593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.335678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.335702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.335818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.335843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.335964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.335991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.336108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.336132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 [2024-07-12 16:03:19.336254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.336279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 
00:26:22.373 [2024-07-12 16:03:19.336468 - 16:03:19.355373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.373 [2024-07-12 16:03:19.336468 - 16:03:19.355373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.373 qpair failed and we were unable to recover it. 00:26:22.373 (this connect()/qpair error pair repeats for every reconnect attempt against tqpair=0x7fec54000b90 throughout this interval)
00:26:22.377 [2024-07-12 16:03:19.355535 - 16:03:19.369716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.377 [2024-07-12 16:03:19.355535 - 16:03:19.369716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.377 qpair failed and we were unable to recover it. 00:26:22.379 (the same error pair repeats for every reconnect attempt against tqpair=0x7fec4c000b90 throughout this interval)
00:26:22.379 [2024-07-12 16:03:19.369849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.379 [2024-07-12 16:03:19.369875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.379 qpair failed and we were unable to recover it. 00:26:22.379 [2024-07-12 16:03:19.369963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.379 [2024-07-12 16:03:19.369989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.379 qpair failed and we were unable to recover it. 00:26:22.379 [2024-07-12 16:03:19.370136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.379 [2024-07-12 16:03:19.370162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.379 qpair failed and we were unable to recover it. 00:26:22.379 [2024-07-12 16:03:19.370263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.379 [2024-07-12 16:03:19.370289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.379 qpair failed and we were unable to recover it. 00:26:22.379 [2024-07-12 16:03:19.370428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.379 [2024-07-12 16:03:19.370454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.379 qpair failed and we were unable to recover it. 00:26:22.379 [2024-07-12 16:03:19.370537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.379 [2024-07-12 16:03:19.370563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.379 qpair failed and we were unable to recover it. 00:26:22.379 [2024-07-12 16:03:19.370686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.379 [2024-07-12 16:03:19.370712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.379 qpair failed and we were unable to recover it. 00:26:22.379 [2024-07-12 16:03:19.370845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.379 [2024-07-12 16:03:19.370872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.379 qpair failed and we were unable to recover it. 00:26:22.379 [2024-07-12 16:03:19.370995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.379 [2024-07-12 16:03:19.371021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.379 qpair failed and we were unable to recover it. 00:26:22.379 [2024-07-12 16:03:19.371134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.379 [2024-07-12 16:03:19.371160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.379 qpair failed and we were unable to recover it. 
00:26:22.379 [2024-07-12 16:03:19.371277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.379 [2024-07-12 16:03:19.371303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.379 qpair failed and we were unable to recover it. 00:26:22.379 [2024-07-12 16:03:19.371415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.379 [2024-07-12 16:03:19.371441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.379 qpair failed and we were unable to recover it. 00:26:22.379 [2024-07-12 16:03:19.371560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.379 [2024-07-12 16:03:19.371586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.379 qpair failed and we were unable to recover it. 00:26:22.379 [2024-07-12 16:03:19.371679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.379 [2024-07-12 16:03:19.371705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.379 qpair failed and we were unable to recover it. 00:26:22.379 [2024-07-12 16:03:19.371810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.371837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.371952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.371978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.372066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.372092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.372210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.372237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.372360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.372386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.372508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.372534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 
00:26:22.380 [2024-07-12 16:03:19.372628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.372654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.372780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.372806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.372897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.372923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.373038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.373063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.373212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.373251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.373385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.373425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.373558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.373585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.373724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.373755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.373867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.373893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.373981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.374007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 
00:26:22.380 [2024-07-12 16:03:19.374096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.374126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.374238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.374264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.374409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.374435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.374552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.374578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.374667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.374692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.374821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.374847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.374970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.374996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.375147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.375173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.375298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.375324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.375448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.375475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 
00:26:22.380 [2024-07-12 16:03:19.375629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.375654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.375771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.375798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.375887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.375913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.376003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.376029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.376147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.376173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.380 qpair failed and we were unable to recover it. 00:26:22.380 [2024-07-12 16:03:19.376309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.380 [2024-07-12 16:03:19.376334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.376458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.376483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.376607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.376632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.376768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.376795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.376891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.376917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 
00:26:22.381 [2024-07-12 16:03:19.377061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.377087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.377229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.377254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.377351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.377377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.377474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.377508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.377637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.377663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.377786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.377813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.377906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.377932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.378061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.378087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.378178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.378204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.378324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.378351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 
00:26:22.381 [2024-07-12 16:03:19.378477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.378503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.378587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.378613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.378697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.378723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.378834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.378862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.378954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.378980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.379068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.379094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.379176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.379203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.379346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.379373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.379493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.379521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.379647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.379674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 
00:26:22.381 [2024-07-12 16:03:19.379772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.379803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.379926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.379952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.380096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.380122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.380203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.380229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.380352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.380378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.380502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.380527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.380639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.380665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.380772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.380799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.380891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.380916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.381015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.381041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 
00:26:22.381 [2024-07-12 16:03:19.381127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.381153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.381266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.381292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.381440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.381467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.381597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.381638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.381 qpair failed and we were unable to recover it. 00:26:22.381 [2024-07-12 16:03:19.381775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.381 [2024-07-12 16:03:19.381819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.381908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.381934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.382023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.382054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.382179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.382205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.382325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.382351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.382476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.382502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 
00:26:22.382 [2024-07-12 16:03:19.382588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.382614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.382708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.382734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.382872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.382898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.382990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.383016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.383100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.383125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.383225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.383251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.383349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.383375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.383493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.383519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.383631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.383656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.383780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.383807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 
00:26:22.382 [2024-07-12 16:03:19.383919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.383945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.384039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.384065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.384157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.384183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.384299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.384324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.384407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.384433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.384511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.384537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.384660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.384686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.384784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.384811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.385502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.385531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.385655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.385681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 
00:26:22.382 [2024-07-12 16:03:19.385812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.385843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.385942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.385968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.386064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.386106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.386227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.386252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.386382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.386408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.386554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.386579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.386695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.386720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.386842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.386869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.386985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.387011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.387109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.387133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 
00:26:22.382 [2024-07-12 16:03:19.387275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.387301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.387485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.387509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.387611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.387651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.387770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.387796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.382 [2024-07-12 16:03:19.387890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.382 [2024-07-12 16:03:19.387916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.382 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.388000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.388026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.388149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.388176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.388293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.388319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.388410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.388436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.388557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.388583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 
00:26:22.383 [2024-07-12 16:03:19.388675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.388700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.388805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.388831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.388930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.388956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.389111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.389151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.389252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.389277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.389412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.389437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.389567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.389593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.389706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.389729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.389862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.389889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.389976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.390001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 
00:26:22.383 [2024-07-12 16:03:19.390134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.390159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.390280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.390305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.390419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.390445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.390591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.390616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.390786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.390813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.390905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.390931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.391020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.391046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.391139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.391164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.391278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.391304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.391420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.391447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 
00:26:22.383 [2024-07-12 16:03:19.391526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.391556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.391676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.391702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.391798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.391825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.391924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.391949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.392066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.392092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.392214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.392240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.392386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.392412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.392573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.392599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.392728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.392759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.392877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.392903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 
00:26:22.383 [2024-07-12 16:03:19.393000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.393040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.393157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.393182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.393322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.393347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.394057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.394086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.394239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.383 [2024-07-12 16:03:19.394265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.383 qpair failed and we were unable to recover it. 00:26:22.383 [2024-07-12 16:03:19.394401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.394443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.394598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.394638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.394776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.394804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.394901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.394928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.395020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.395046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 
00:26:22.384 [2024-07-12 16:03:19.395133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.395159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.395298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.395326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.395419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.395453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.395545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.395571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.395695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.395722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.395821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.395847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.395938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.395964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.396110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.396140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.396248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.396275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.396396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.396422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 
00:26:22.384 [2024-07-12 16:03:19.396573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.396599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.396777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.396805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.396908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.396935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.397043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.397069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.397198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.397224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.397346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.397373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.397536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.397577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.397692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.397719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.397824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.397851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.397947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.397974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 
00:26:22.384 [2024-07-12 16:03:19.398077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.398103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.398227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.398254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.398406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.398432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.398531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.398557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.398686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.398713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.398844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.398871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.398977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.399004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.399094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.399128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.399282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.399308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.399423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.399450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 
00:26:22.384 [2024-07-12 16:03:19.399535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.399561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.399707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.399733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.399836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.399863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.399982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.400008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.384 [2024-07-12 16:03:19.400216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.384 [2024-07-12 16:03:19.400242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.384 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.400360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.400386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.400506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.400531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.400659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.400686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.400804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.400830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.400949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.400976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 
00:26:22.385 [2024-07-12 16:03:19.401094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.401121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.401233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.401259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.401374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.401400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.401495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.401521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.401671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.401697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.401800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.401827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.401919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.401945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.402066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.402096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.402189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.402226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.402417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.402444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 
00:26:22.385 [2024-07-12 16:03:19.402601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.402627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.402752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.402779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.402876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.402902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.403008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.403034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.403179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.403206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.403333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.403359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.403477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.403503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.403593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.403620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.403746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.403772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.403867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.403894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 
00:26:22.385 [2024-07-12 16:03:19.403988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.404014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.404142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.404169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.404259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.404285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.404437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.404464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.404568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.404594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.404689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.385 [2024-07-12 16:03:19.404715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.385 qpair failed and we were unable to recover it. 00:26:22.385 [2024-07-12 16:03:19.404847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.404890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.404986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.405013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.405136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.405162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.405285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.405311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 
00:26:22.386 [2024-07-12 16:03:19.405463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.405489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.405667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.405692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.405802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.405828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.405926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.405952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec54000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.406073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.406101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.406191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.406218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.406312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.406338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.406484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.406510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.406630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.406656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.406752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.406779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 
00:26:22.386 [2024-07-12 16:03:19.406874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.406901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.406993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.407020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.407133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.407158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.407271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.407297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.407387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.407413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.407536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.407562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.407649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.407676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.407796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.407826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.407924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.407950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.408076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.408102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 
00:26:22.386 [2024-07-12 16:03:19.408285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.408311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.408460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.408487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.408606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.408632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.408750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.408777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.408865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.408891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.408988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.409015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.409129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.409155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.409243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.409269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.409388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.409414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.409531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.409557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 
00:26:22.386 [2024-07-12 16:03:19.409655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.409680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.409784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.409811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.409933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.409959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.410105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.410131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.410249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.410275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.410425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.410452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.386 [2024-07-12 16:03:19.410566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.386 [2024-07-12 16:03:19.410593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.386 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.410686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.410713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.410817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.410844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.410935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.410961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 
00:26:22.387 [2024-07-12 16:03:19.411080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.411106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.411255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.411281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.411398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.411424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.411527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.411554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.411681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.411707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.411812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.411839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.411939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.411965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.412086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.412112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.412206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.412232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.412391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.412417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 
00:26:22.387 [2024-07-12 16:03:19.412510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.412536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.412685] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:22.387 [2024-07-12 16:03:19.412718] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:22.387 [2024-07-12 16:03:19.412725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.412734] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:22.387 [2024-07-12 16:03:19.412767] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:22.387 [2024-07-12 16:03:19.412772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 [2024-07-12 16:03:19.412780] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.412870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.412896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.413009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.413035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.413136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:26:22.387 [2024-07-12 16:03:19.413152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.413179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.413197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:26:22.387 [2024-07-12 16:03:19.413278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.413303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.413416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.413444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 
00:26:22.387 [2024-07-12 16:03:19.413440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:26:22.387 [2024-07-12 16:03:19.413444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:22.387 [2024-07-12 16:03:19.413592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.413617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.413707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.413734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.413837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.413863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.413950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.413976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.414076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.414103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.414186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.414211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.414324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.414350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.414448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.414474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.414571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.414597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 
00:26:22.387 [2024-07-12 16:03:19.414713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.414746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.414849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.414879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.414974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.414999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.415094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.415120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.415213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.415239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.415360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.415386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.387 [2024-07-12 16:03:19.415501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.387 [2024-07-12 16:03:19.415526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.387 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.415623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.415649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.415777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.415804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.415895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.415923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 
00:26:22.388 [2024-07-12 16:03:19.416004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.416029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.416126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.416150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.416272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.416298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.416417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.416443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.416570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.416596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.416718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.416750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.416852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.416878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.416961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.416987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.417080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.417106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.417195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.417220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 
00:26:22.388 [2024-07-12 16:03:19.417316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.417342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.417477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.417503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.417624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.417650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.417729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.417761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.417861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.417886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.417972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.418001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.418181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.418208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.418304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.418330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.418430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.418457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.418577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.418603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 
00:26:22.388 [2024-07-12 16:03:19.418716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.418747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.418851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.418877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.418971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.418997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.419090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.419118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.419197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.419222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.419319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.419344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.419439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.419465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.419560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.419586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.419686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.419711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.419845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.419871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 
00:26:22.388 [2024-07-12 16:03:19.419964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.419991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.420092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.420122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.420237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.420263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.420349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.420374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.420471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.420497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.420591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.388 [2024-07-12 16:03:19.420617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.388 qpair failed and we were unable to recover it. 00:26:22.388 [2024-07-12 16:03:19.420734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.420808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.420903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.420929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.421026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.421052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.421178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.421206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 
00:26:22.389 [2024-07-12 16:03:19.421324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.421351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.421466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.421493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.421611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.421637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.421764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.421791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.421879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.421904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.422009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.422035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.422129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.422158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.422305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.422331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.422427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.422452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.422571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.422597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 
00:26:22.389 [2024-07-12 16:03:19.422684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.422710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.422830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.422857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.422973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.422999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.423121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.423147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.423238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.423264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.423389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.423414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.423529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.423554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.423672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.423698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.423806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.423833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.423933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.423959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 
00:26:22.389 [2024-07-12 16:03:19.424113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.424139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.424242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.424268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.424349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.424376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.424493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.424519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.424610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.424636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.424728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.424759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.424856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.424882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.424971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.424996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.425116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.425142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.425264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.425291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 
00:26:22.389 [2024-07-12 16:03:19.425407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.389 [2024-07-12 16:03:19.425433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.389 qpair failed and we were unable to recover it. 00:26:22.389 [2024-07-12 16:03:19.425521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.425551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.425634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.425659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.425783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.425809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.425911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.425937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.426069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.426095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.426216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.426242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.426355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.426380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.426506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.426532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.426634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.426659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 
00:26:22.390 [2024-07-12 16:03:19.426790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.426817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.426913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.426939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.427037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.427067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.427184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.427210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.427307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.427333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.427457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.427482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.427600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.427627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.427728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.427760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.427859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.427883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.427978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.428003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 
00:26:22.390 [2024-07-12 16:03:19.428084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.428109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.428225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.428250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.428340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.428369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.428489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.428514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.428608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.428633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.428784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.428811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.428930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.428956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.429080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.429105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.429203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.429229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.429314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.429338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 
00:26:22.390 [2024-07-12 16:03:19.429453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.429479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.429575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.429600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.429710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.429736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.429874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.429901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.429995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.430021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.430101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.430126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.430219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.430245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.430343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.430368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.430514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.430540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 00:26:22.390 [2024-07-12 16:03:19.430624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.390 [2024-07-12 16:03:19.430650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.390 qpair failed and we were unable to recover it. 
00:26:22.390 [2024-07-12 16:03:19.430770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.430797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.430919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.430950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.431080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.431106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.431251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.431277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.431427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.431453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.431550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.431575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.431685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.431711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.431800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.431827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.431911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.431936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.432057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.432090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 
00:26:22.391 [2024-07-12 16:03:19.432211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.432237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.432349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.432375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.432499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.432525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.432648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.432674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.432777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.432803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.432900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.432926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.433063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.433089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.433209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.433235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.433333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.433360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.433492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.433518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 
00:26:22.391 [2024-07-12 16:03:19.433662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.433688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.433787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.433812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.433905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.433931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.434055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.434081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.434202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.434228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.434351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.434377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.434504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.434530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.434677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.434703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.434805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.434832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.434931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.434957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 
00:26:22.391 [2024-07-12 16:03:19.435044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.435070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.435164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.435194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.435337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.435363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.435454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.435480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.435601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.435627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.435727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.435760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.435853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.435879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.435977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.436003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.436127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.436153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 00:26:22.391 [2024-07-12 16:03:19.436280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.391 [2024-07-12 16:03:19.436306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.391 qpair failed and we were unable to recover it. 
00:26:22.391 [2024-07-12 16:03:19.436444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.391 [2024-07-12 16:03:19.436470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420
00:26:22.392 qpair failed and we were unable to recover it.
00:26:22.392 [... the same three-line sequence (connect() failed, errno = 111 -> sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every reconnect attempt between 16:03:19.436 and 16:03:19.466 ...]
00:26:22.397 [2024-07-12 16:03:19.466553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.397 [2024-07-12 16:03:19.466582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420
00:26:22.397 qpair failed and we were unable to recover it.
00:26:22.397 [2024-07-12 16:03:19.466736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.466770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.466857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.466882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.467026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.467051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.467222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.467247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.467345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.467371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.467570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.467596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.467763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.467790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.467880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.467905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.468030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.468056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.468173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.468199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 
00:26:22.397 [2024-07-12 16:03:19.468287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.468312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.468400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.468426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.468546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.468573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.468729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.468761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.468909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.468935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.469053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.469079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.469209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.469236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.469369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.469395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.469577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.469606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.469727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.469759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 
00:26:22.397 [2024-07-12 16:03:19.469883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.469909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.470004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.470030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.470173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.470199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.470346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.470372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.470530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.470556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.470664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.470690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.470809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.470836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.470952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.470977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.471123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.471148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.397 qpair failed and we were unable to recover it. 00:26:22.397 [2024-07-12 16:03:19.471261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.397 [2024-07-12 16:03:19.471287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 
00:26:22.398 [2024-07-12 16:03:19.471440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.471466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.471594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.471620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.471720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.471752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.471872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.471898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.471977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.472002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.472215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.472246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.472349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.472377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.472534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.472559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.472719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.472771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.472923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.472950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 
00:26:22.398 [2024-07-12 16:03:19.473142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.473172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.473323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.473349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.473458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.473484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.473571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.473595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.473702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.473728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.473865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.473891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.473977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.474001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.474091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.474117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.474262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.474288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.474389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.474415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 
00:26:22.398 [2024-07-12 16:03:19.474519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.474545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.474665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.474690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.474802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.474828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.474918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.474947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.475096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.475121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.475285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.475311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.475518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.475543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.475710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.475743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.475890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.475920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.476016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.476042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 
00:26:22.398 [2024-07-12 16:03:19.476191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.476217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.476385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.476411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.476559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.476585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.476700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.476726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.398 [2024-07-12 16:03:19.476832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.398 [2024-07-12 16:03:19.476859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.398 qpair failed and we were unable to recover it. 00:26:22.399 [2024-07-12 16:03:19.476978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.477004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 00:26:22.399 [2024-07-12 16:03:19.477157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.477183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 00:26:22.399 [2024-07-12 16:03:19.477358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.477384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 00:26:22.399 [2024-07-12 16:03:19.477514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.477539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 00:26:22.399 [2024-07-12 16:03:19.477671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.477697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 
00:26:22.399 [2024-07-12 16:03:19.477828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.477854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 00:26:22.399 [2024-07-12 16:03:19.477966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.477991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 00:26:22.399 [2024-07-12 16:03:19.478098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.478123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 00:26:22.399 [2024-07-12 16:03:19.478254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.478280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 00:26:22.399 [2024-07-12 16:03:19.478486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.478512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 00:26:22.399 [2024-07-12 16:03:19.478624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.478650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 00:26:22.399 [2024-07-12 16:03:19.478733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.478773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 00:26:22.399 [2024-07-12 16:03:19.478856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.478882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 00:26:22.399 [2024-07-12 16:03:19.478999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.479025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 00:26:22.399 [2024-07-12 16:03:19.479129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.399 [2024-07-12 16:03:19.479155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fec4c000b90 with addr=10.0.0.2, port=4420 00:26:22.399 qpair failed and we were unable to recover it. 
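For anyone triaging this log: errno = 111 is ECONNREFUSED on Linux, i.e. every TCP connect to 10.0.0.2:4420 was actively refused because nothing was accepting connections on that address and port at the time. A minimal sketch of how to confirm that condition from the test host follows; these are illustrative checks only, not commands this job runs, with the address and port taken from the messages above:

  # errno 111 is ECONNREFUSED
  python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))'   # -> 111 Connection refused
  # is anything listening on the NVMe/TCP port?
  ss -ltn | grep 4420 || echo "no TCP listener on port 4420"
  # does a raw TCP connect to the target address get refused?
  timeout 1 bash -c 'cat < /dev/null > /dev/tcp/10.0.0.2/4420' \
    || echo "connect to 10.0.0.2:4420 refused or timed out"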
[... further attempts against tqpair=0x7fec4c000b90 fail the same way through 16:03:19.479921, after which the identical failure continues against a newly created qpair ...]
00:26:22.399 [2024-07-12 16:03:19.480105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:22.399 [2024-07-12 16:03:19.480149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420
00:26:22.399 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error pair for tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 repeats continuously through 16:03:19.494408, every attempt ending in "qpair failed and we were unable to recover it." ...]
00:26:22.401 [2024-07-12 16:03:19.494518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.494543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.494697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.494722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.494918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.494953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.495080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.495105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.495260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.495284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.495453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.495478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.495634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.495658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.495832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.495862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.496013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.496046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.496197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.496221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 
00:26:22.401 [2024-07-12 16:03:19.496396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.496422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.496515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.496540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.496623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.496646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.496761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.496786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.496937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.496962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.497087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.497112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.401 [2024-07-12 16:03:19.497238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.401 [2024-07-12 16:03:19.497263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.401 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.497406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.497430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.497520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.497545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.497690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.497715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 
00:26:22.402 [2024-07-12 16:03:19.497845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.497870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.498016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.498041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.498226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.498254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.498372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.498397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.498534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.498563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.498805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.498835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.498921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.498946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.499076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.499101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.499198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.499223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.499342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.499366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 
00:26:22.402 [2024-07-12 16:03:19.499502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.499527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.499612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.499636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.499722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.499752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.499913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.499938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.500082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.500107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.500213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.500238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.500350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.500375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.500547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.500577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.500692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.500717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.500847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.500873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 
00:26:22.402 [2024-07-12 16:03:19.500983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.501007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.501124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.501149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.501251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.501274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.501385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.501410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.501600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.501625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.501786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.501812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.502020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.502045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.502173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.502198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.502400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.502425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 00:26:22.402 [2024-07-12 16:03:19.502522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.502547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.402 qpair failed and we were unable to recover it. 
00:26:22.402 [2024-07-12 16:03:19.502667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.402 [2024-07-12 16:03:19.502692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.502874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.502900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.503051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.503076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.503255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.503280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.503387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.503411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.503552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.503577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.503723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.503754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.503863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.503888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.504000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.504025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.504245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.504273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 
00:26:22.403 [2024-07-12 16:03:19.504409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.504434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.504518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.504541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.504671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.504695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.504845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.504871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.504991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.505020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.505134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.505158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.505244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.505269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.505393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.505418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.505533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.505557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.505678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.505703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 
00:26:22.403 [2024-07-12 16:03:19.505826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.505852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.506062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.506092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.506243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.506268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.506377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.506402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.506528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.506553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.506669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.506694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.506779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.506804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.506891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.506916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.507037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.507062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.507190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.507214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 
00:26:22.403 [2024-07-12 16:03:19.507351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.507381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.507506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.507531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.507679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.507703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.507820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.507852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.508025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.508050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.508198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.508223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.508354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.508378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.508499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.508524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.508743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.508769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.508867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.508891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 
00:26:22.403 [2024-07-12 16:03:19.509029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.403 [2024-07-12 16:03:19.509054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.403 qpair failed and we were unable to recover it. 00:26:22.403 [2024-07-12 16:03:19.509173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.509201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.509302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.509327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.509462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.509486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.509655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.509680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.509827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.509857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.509986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.510011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.510126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.510151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.510271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.510296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.510488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.510513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 
00:26:22.404 [2024-07-12 16:03:19.510733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.510762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.510912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.510937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.511053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.511078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.511196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.511221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.511340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.511364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.511482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.511507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.511588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.511612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.511761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.511786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.511969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.511994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.512165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.512190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 
00:26:22.404 [2024-07-12 16:03:19.512291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.512321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.512476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.512500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.512647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.512671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.512799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.512824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.512947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.512972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.513114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.513138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.513236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.513261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.513372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.513396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.513514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.513542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.513660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.513684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 
00:26:22.404 [2024-07-12 16:03:19.513897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.513923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.514089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.514113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.514282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.514307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.514471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.514501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.514662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.514687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.514896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.514921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.515028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.515052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.515153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.515178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.515290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.515314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.515467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.515492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 
00:26:22.404 [2024-07-12 16:03:19.515650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.404 [2024-07-12 16:03:19.515674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.404 qpair failed and we were unable to recover it. 00:26:22.404 [2024-07-12 16:03:19.515876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.515901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.516044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.516069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.516229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.516254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.516397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.516422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.516594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.516619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.516799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.516824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.516958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.516982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.517192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.517222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.517346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.517377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 
00:26:22.405 [2024-07-12 16:03:19.517570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.517595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.517713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.517746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.517869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.517893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.517980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.518004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.518147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.518171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.518285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.518310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.518482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.518507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.518690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.518715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.518824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.518849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.518984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.519009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 
00:26:22.405 [2024-07-12 16:03:19.519128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.519152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.519269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.519294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.519408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.519432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.519550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.519575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.519746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.519772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.519856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.519882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.519969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.519994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.520111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.520136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.520267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.520292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.520501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.520526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 
00:26:22.405 [2024-07-12 16:03:19.520626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.520652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.520817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.520842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.521022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.521047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.521164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.521189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.521329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.521354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.521470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.521494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.521647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.521672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.521850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.521875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.522087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.522112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 00:26:22.405 [2024-07-12 16:03:19.522258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.522283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.405 qpair failed and we were unable to recover it. 
00:26:22.405 [2024-07-12 16:03:19.522366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.405 [2024-07-12 16:03:19.522390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.522477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.522500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.522619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.522644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.522761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.522787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.522947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.522971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.523145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.523169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.523371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.523396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.523485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.523509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.523620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.523645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.523768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.523794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 
00:26:22.406 [2024-07-12 16:03:19.523955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.523980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.524186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.524210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.524355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.524380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.524494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.524519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.524665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.524689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.524916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.524942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.525093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.525122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.525296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.525320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.525415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.525439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.525591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.525616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 
00:26:22.406 [2024-07-12 16:03:19.525724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.525754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.525839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.525865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.525986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.526011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.526151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.526176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.526353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.526378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.526545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.526574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.526732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.526762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.526893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.526918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.527004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.527027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.527180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.527205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 
00:26:22.406 [2024-07-12 16:03:19.527386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.527416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.527537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.527562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.527721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.527757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.527919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.527945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.528105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.406 [2024-07-12 16:03:19.528130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.406 qpair failed and we were unable to recover it. 00:26:22.406 [2024-07-12 16:03:19.528351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.528376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.528466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.528491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.528581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.528604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.528724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.528758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.528895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.528920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 
00:26:22.407 [2024-07-12 16:03:19.529163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.529191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.529311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.529336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.529451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.529475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.529586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.529615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.529835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.529867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.529960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.529985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.530073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.530099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.530228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.530253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.530438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.530463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.530547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.530571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 
00:26:22.407 [2024-07-12 16:03:19.530728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.530762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.530937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.530971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.531090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.531115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.531320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.531345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.531476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.531501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.531653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.531678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.531816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.531841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.531959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.531984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.532112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.532137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.532267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.532292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 
00:26:22.407 [2024-07-12 16:03:19.532412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.532437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.532529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.532554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.532677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.532707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.532862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.532887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.532994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.533023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.533115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.533140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.533282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.533307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.533430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.533455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.533571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.533596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.533683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.533718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 
00:26:22.407 [2024-07-12 16:03:19.534047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.534072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.534187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.534212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.407 [2024-07-12 16:03:19.534326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.407 [2024-07-12 16:03:19.534351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.407 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.534501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.534526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.534705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.534730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.534893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.534918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.535088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.535113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.535309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.535334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.535449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.535474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.535608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.535633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 
00:26:22.408 [2024-07-12 16:03:19.535763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.535789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.535939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.535964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.536128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.536152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.536256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.536286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.536432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.536457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.536611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.536636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.536831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.536856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.536978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.537002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.537110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.537135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.537223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.537257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 
00:26:22.408 [2024-07-12 16:03:19.537382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.537406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.537606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.537630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.537779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.537805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.537912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.537937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.538022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.538047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.538166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.538191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.538279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.538304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.538420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.538445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.538572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.538597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.538697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.538722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 
00:26:22.408 [2024-07-12 16:03:19.538849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.538874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.538967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.538991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.539143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.539167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.539320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.539345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.539524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.539548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.539702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.539727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.539829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.539854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.540010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.540034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.540235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.540260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.540415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.540440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 
00:26:22.408 [2024-07-12 16:03:19.540581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.540605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.540780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.540820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.408 [2024-07-12 16:03:19.540973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.408 [2024-07-12 16:03:19.541002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.408 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.541157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.541181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.541299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.541324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.541520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.541544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.541672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.541708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.541838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.541863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.542054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.542078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.542242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.542266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 
00:26:22.409 [2024-07-12 16:03:19.542471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.542496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.542619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.542644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.542751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.542777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.542922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.542947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.543126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.543161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.543359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.543384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.543476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.543500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.543647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.543672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.543771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.543798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.543930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.543955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 
00:26:22.409 [2024-07-12 16:03:19.544125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.544150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.544281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.544305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.544413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.544438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.544630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.544655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.409 [2024-07-12 16:03:19.544802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.544828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:26:22.409 [2024-07-12 16:03:19.544918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.544943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:22.409 [2024-07-12 16:03:19.545074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.545099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:22.409 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:22.409 [2024-07-12 16:03:19.545300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.545326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 
00:26:22.409 [2024-07-12 16:03:19.545459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.545483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.545699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.545724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.545833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.545859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.545982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.546007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.546162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.546187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.546299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.546324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.546454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.546479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.546598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.546623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.409 qpair failed and we were unable to recover it. 00:26:22.409 [2024-07-12 16:03:19.546769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.409 [2024-07-12 16:03:19.546795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.546981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.547015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 
00:26:22.410 [2024-07-12 16:03:19.547159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.547184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.547301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.547325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.547451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.547477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.547668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.547693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.547816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.547841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.547959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.547984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.548108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.548133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.548280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.548305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.548425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.548449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.548593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.548618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 
00:26:22.410 [2024-07-12 16:03:19.548703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.548729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.548884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.548909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.549011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.549036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.549147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.549173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.549321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.549347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.549500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.549525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.549642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.549667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.549824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.549850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.549964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.549988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.550111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.550136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 
00:26:22.410 [2024-07-12 16:03:19.550245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.550270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.550402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.550427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.550555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.550580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.550702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.550727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.550855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.550881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.550970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.550996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.551079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.551104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.551225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.551251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.551336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.551361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.551457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.551483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 
00:26:22.410 [2024-07-12 16:03:19.551598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.551623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.551750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.551775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.551899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.551924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.552052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.552078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.552166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.552191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.552320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.552352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.552463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.552489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.410 [2024-07-12 16:03:19.552600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.410 [2024-07-12 16:03:19.552624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.410 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.552751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.552777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.552880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.552905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 
00:26:22.411 [2024-07-12 16:03:19.553028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.553052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.553173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.553198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.553284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.553309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.553426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.553451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.553544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.553569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.553688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.553713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.553807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.553832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.553916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.553941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.554046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.554071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.554189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.554214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 
00:26:22.411 [2024-07-12 16:03:19.554339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.554364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.554458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.554482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.554625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.554650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.554764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.554790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.554890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.554915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.555010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.555035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.555154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.555183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.555320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.555345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.555440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.555466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.555600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.555624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 
00:26:22.411 [2024-07-12 16:03:19.555703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.555728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.555863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.555888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.555983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.556009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.556132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.556157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.556251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.556276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.556424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.556449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.556565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.411 [2024-07-12 16:03:19.556590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.411 qpair failed and we were unable to recover it. 00:26:22.411 [2024-07-12 16:03:19.556713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.556743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.556835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.556860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.556949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.556974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 
00:26:22.412 [2024-07-12 16:03:19.557074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.557099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.557217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.557242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.557361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.557386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.557498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.557523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.557643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.557669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.557771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.557797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.557916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.557940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.558040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.558065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.558196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.558222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.558371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.558396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 
00:26:22.412 [2024-07-12 16:03:19.558544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.558570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.558716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.558748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.558843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.558868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.558961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.558990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.559105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.559130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.559248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.559273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.559384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.559410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.559531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.559556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.559644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.559669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.559767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.559793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 
00:26:22.412 [2024-07-12 16:03:19.559886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.559911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.559996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.560021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.560168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.560193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.560302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.560327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.560422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.560447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.560563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.560588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.560718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.560750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.560858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.560883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.560973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.560998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.561086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.561111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 
00:26:22.412 [2024-07-12 16:03:19.561228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.561254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.561399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.561423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.412 [2024-07-12 16:03:19.561544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.412 [2024-07-12 16:03:19.561569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.412 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.561687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.561713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.561831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.561857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.561945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.561970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.562054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.562079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.562179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.562204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.562299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.562324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 
00:26:22.413 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:22.413 [2024-07-12 16:03:19.562453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.562483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.413 [2024-07-12 16:03:19.562605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.562632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.562752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.562777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.562873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.562898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.563028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.563053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.563200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.563226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.563339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.563363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.563492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.563517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.563657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.563681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 
00:26:22.413 [2024-07-12 16:03:19.563795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.563821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.563909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.563934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.564064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.564089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.564185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.564210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.564329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.564354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.564467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.564492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.564605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.564629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.564762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.564787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.564888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.564912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.565031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.565055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 
00:26:22.413 [2024-07-12 16:03:19.565180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.565204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.565330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.565355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.565473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.565498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.565613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.565637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.565765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.565790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.565881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.565906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.566027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.566052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.566138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.566163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.566303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.566328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 00:26:22.413 [2024-07-12 16:03:19.566447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.413 [2024-07-12 16:03:19.566472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.413 qpair failed and we were unable to recover it. 
00:26:22.413 [2024-07-12 16:03:19.566550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.566575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.566720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.566751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.566843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.566868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.566961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.566986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.567127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.567152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.567303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.567327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.567425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.567449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.567597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.567622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.567733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.567790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.567885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.567911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 
00:26:22.414 [2024-07-12 16:03:19.567999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.568025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.568119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.568148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.568335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.568360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.568493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.568518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.568647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.568672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.568772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.568798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.568891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.568916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.569000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.569025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.569141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.569166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.569278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.569303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 
00:26:22.414 [2024-07-12 16:03:19.569412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.569437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.569580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.569605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.569689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.569714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.569844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.569870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.569969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.569994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.570139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.570164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.570273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.570298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.570419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.570444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.570629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.570654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.570774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.570799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 
00:26:22.414 [2024-07-12 16:03:19.570919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.570944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.571036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.571061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.571180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.571206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.571329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.571354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.571437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.571462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.571562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.571587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.571732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.571761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.571857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.571882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.571974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.572003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 00:26:22.414 [2024-07-12 16:03:19.572118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.414 [2024-07-12 16:03:19.572143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.414 qpair failed and we were unable to recover it. 
00:26:22.414 [2024-07-12 16:03:19.572230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.572255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.572378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.572403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.572606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.572631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.572766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.572792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.572887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.572912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.573022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.573047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.573197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.573222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.573317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.573342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.573458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.573483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.573599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.573625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 
00:26:22.415 [2024-07-12 16:03:19.573749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.573775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.573876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.573901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.573989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.574014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.574125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.574150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.574273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.574298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.574449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.574474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.574593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.574618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.574729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.574762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.574856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.574881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.574976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.575000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 
00:26:22.415 [2024-07-12 16:03:19.575086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.575111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.575227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.575252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.575373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.575398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.575531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.575556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.575676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.575701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.575798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.575824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.575979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.576003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.576192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.576217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.576331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.576356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.576514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.576539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 
00:26:22.415 [2024-07-12 16:03:19.576684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.576709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.576843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.576869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.577000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.577025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.577167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.577192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.577317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.577341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.577491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.577516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.415 qpair failed and we were unable to recover it. 00:26:22.415 [2024-07-12 16:03:19.577615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.415 [2024-07-12 16:03:19.577640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.416 qpair failed and we were unable to recover it. 00:26:22.416 [2024-07-12 16:03:19.577773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.416 [2024-07-12 16:03:19.577799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.416 qpair failed and we were unable to recover it. 00:26:22.416 [2024-07-12 16:03:19.577918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.416 [2024-07-12 16:03:19.577943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.416 qpair failed and we were unable to recover it. 00:26:22.416 [2024-07-12 16:03:19.578059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.416 [2024-07-12 16:03:19.578084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.416 qpair failed and we were unable to recover it. 
00:26:22.416 [2024-07-12 16:03:19.578232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.416 [2024-07-12 16:03:19.578257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.416 qpair failed and we were unable to recover it. 00:26:22.416 [2024-07-12 16:03:19.578349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.416 [2024-07-12 16:03:19.578374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.416 qpair failed and we were unable to recover it. 00:26:22.416 [2024-07-12 16:03:19.578515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.416 [2024-07-12 16:03:19.578539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.416 qpair failed and we were unable to recover it. 00:26:22.416 [2024-07-12 16:03:19.578630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.416 [2024-07-12 16:03:19.578654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.416 qpair failed and we were unable to recover it. 00:26:22.416 [2024-07-12 16:03:19.578773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.416 [2024-07-12 16:03:19.578799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.416 qpair failed and we were unable to recover it. 00:26:22.416 [2024-07-12 16:03:19.578893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.416 [2024-07-12 16:03:19.578918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.416 qpair failed and we were unable to recover it. 00:26:22.416 [2024-07-12 16:03:19.579002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.416 [2024-07-12 16:03:19.579026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.579138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.579163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.579280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.579305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.579389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.579421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 
00:26:22.417 [2024-07-12 16:03:19.579503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.579528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.579679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.579704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.579798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.579823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.579946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.579971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.580102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.580127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.580263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.580288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.580444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.580468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.580585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.580610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.580744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.580769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.580866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.580891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 
00:26:22.417 [2024-07-12 16:03:19.581011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.581036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.581149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.581173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.581292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.581317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.581464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.581489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.581599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.581623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.581759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.581784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.581885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.581914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.582007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.582032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.582154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.582179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 00:26:22.417 [2024-07-12 16:03:19.582280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.417 [2024-07-12 16:03:19.582305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.417 qpair failed and we were unable to recover it. 
00:26:22.418 [2024-07-12 16:03:19.582429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.582453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.582570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.582595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.582750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.582776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.582897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.582922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.583029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.583054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.583165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.583189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.583339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.583364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.583585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.583610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.583726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.583769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.583894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.583919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 
00:26:22.418 [2024-07-12 16:03:19.584067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.584092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.584234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.584259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.584422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.584447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.584570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.584594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.584706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.584731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.584832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.584856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.584971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.584996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.585116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.585141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.585232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.585257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.585409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.585434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 
00:26:22.418 [2024-07-12 16:03:19.585538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.585564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.585649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.585673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.585820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.585846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.585960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.585990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.586114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.586139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.586241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.586266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.586396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.586421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.586551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.586575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.586720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.586750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.586838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.586863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 
00:26:22.418 [2024-07-12 16:03:19.586981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.587006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.587162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 Malloc0 00:26:22.418 [2024-07-12 16:03:19.587187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.587312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.587337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 [2024-07-12 16:03:19.587423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.587448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.418 [2024-07-12 16:03:19.587574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.587599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:22.418 [2024-07-12 16:03:19.587730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.418 [2024-07-12 16:03:19.587811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.418 qpair failed and we were unable to recover it. 00:26:22.418 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.418 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:22.418 [2024-07-12 16:03:19.588011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.588038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.588133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.588159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 
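Interleaved with the connection retries, the target side of the test starts coming up: line 21 of host/target_disconnect.sh issues rpc_cmd nvmf_create_transport -t tcp -o, and the bare "Malloc0" is the name echoed back by a malloc bdev creation RPC. Below is a rough standalone sketch of that setup using scripts/rpc.py directly instead of the rpc_cmd helper; the malloc bdev sizes are placeholders, not values from this log.

# Target-side setup, approximated with SPDK's rpc.py.
rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o        # same call as target_disconnect.sh line 21 in the trace above
$rpc bdev_malloc_create -b Malloc0 64 512   # prints the bdev name, i.e. the stray "Malloc0" in the output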
00:26:22.419 [2024-07-12 16:03:19.588255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.588281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.588407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.588432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.588570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.588595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.588750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.588776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.588917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.588942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.589134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.589159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.589296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.589332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.589459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.589484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.589678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.589703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.589911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.589938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 
00:26:22.419 [2024-07-12 16:03:19.590141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.590166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.590305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.590337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.590451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.590475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.590591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.590616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.590748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.590774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.590932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.590932] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.419 [2024-07-12 16:03:19.590957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.591128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.591153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.591277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.591301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.591544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.591569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.591799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.591825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 
00:26:22.419 [2024-07-12 16:03:19.591978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.592003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.592126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.592151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.592268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.592293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.592385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.592409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.592533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.592563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.592690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.592715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.592840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.592865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.593008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.593033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.593149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.593174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.593281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.593306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 
00:26:22.419 [2024-07-12 16:03:19.593453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.593479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.593670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.593695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.593871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.593896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.593981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.594006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.594148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.594173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.594297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.594322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.594494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.594519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.594682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.594707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.594814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.419 [2024-07-12 16:03:19.594839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.419 qpair failed and we were unable to recover it. 00:26:22.419 [2024-07-12 16:03:19.594983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.595008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 
00:26:22.420 [2024-07-12 16:03:19.595132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.595157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.595279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.595304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.595478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.595503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.595604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.595629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.595754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.595779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.595860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.595884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.596011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.596036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.596148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.596173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.596287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.596311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.596403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.596428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 
00:26:22.420 [2024-07-12 16:03:19.596506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.596531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.596871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.596897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.597050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.597075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.597177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.597202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.597330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.597355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.597557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.597582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.597742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.597767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.597934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.597959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.598107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.598132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.598278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.598302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 
00:26:22.420 [2024-07-12 16:03:19.598493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.598518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.598688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.598713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.598870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.598895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.598990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.599015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.420 [2024-07-12 16:03:19.599170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.599199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:22.420 [2024-07-12 16:03:19.599387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.599412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.420 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:22.420 [2024-07-12 16:03:19.599573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.599599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.599782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.599808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 
00:26:22.420 [2024-07-12 16:03:19.599919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.599944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.600091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.600116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.600215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.600239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.600346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.600371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.600517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.600543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.600669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.600694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.600800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.600825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.600977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.601003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.601128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.601159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 00:26:22.420 [2024-07-12 16:03:19.601306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.420 [2024-07-12 16:03:19.601332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.420 qpair failed and we were unable to recover it. 
00:26:22.420 [2024-07-12 16:03:19.601511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.601536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.601687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.601717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.601875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.601901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.602023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.602047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.602222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.602247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.602407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.602432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.602555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.602580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.602702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.602727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.602868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.602904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.603070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.603095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 
00:26:22.421 [2024-07-12 16:03:19.603238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.603263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.603379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.603404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.603652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.603681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.603766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.603791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.603940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.603970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.604090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.604115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.604218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.604243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.604396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.604421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.604541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.604566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.604687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.604712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 
00:26:22.421 [2024-07-12 16:03:19.604843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.604869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.605015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.605040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.605186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.605211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.605330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.605355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.605469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.605494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.605615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.605640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.605824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.605850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.606019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.606044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.606183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.606208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.606352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.606377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 
00:26:22.421 [2024-07-12 16:03:19.606487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.606512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.606613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.606637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.606802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.606827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 [2024-07-12 16:03:19.607025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.607050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.421 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.421 [2024-07-12 16:03:19.607243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.421 [2024-07-12 16:03:19.607268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.421 qpair failed and we were unable to recover it. 00:26:22.422 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:22.422 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.422 [2024-07-12 16:03:19.607447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.607472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:22.422 [2024-07-12 16:03:19.607625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.607650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.607832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.607861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 
00:26:22.422 [2024-07-12 16:03:19.607989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.608020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.608264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.608289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.608393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.608422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.608611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.608636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.608754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.608780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.608889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.608914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.609003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.609027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.609147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.609173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.609319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.609343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.609486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.609511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 
00:26:22.422 [2024-07-12 16:03:19.609650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.609676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.610018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.610057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.610202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.610226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.610465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.610490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.610598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.610623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.610774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.610799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.610908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.610943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.611076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.611101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.611223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.611248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.611339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.611364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 
00:26:22.422 [2024-07-12 16:03:19.611482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.611507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.611632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.611657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.611851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.611884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.612124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.612155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.612298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.612322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.612451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.612476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.612618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.612643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.612765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.612791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.612916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.612941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.613035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.613060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 
00:26:22.422 [2024-07-12 16:03:19.613173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.613198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.613345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.613369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.613488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.613513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.613746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.613771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.613900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.613928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.614080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.614104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.422 qpair failed and we were unable to recover it. 00:26:22.422 [2024-07-12 16:03:19.614209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.422 [2024-07-12 16:03:19.614234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.614355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.614380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.614508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.614532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.614685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.614710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 
00:26:22.423 [2024-07-12 16:03:19.614842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.614867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.614983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.615008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.423 [2024-07-12 16:03:19.615194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.615219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.615313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:22.423 [2024-07-12 16:03:19.615338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.423 [2024-07-12 16:03:19.615484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.615509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:22.423 [2024-07-12 16:03:19.615626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.615651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.615817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.615843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.615935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.615960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 
00:26:22.423 [2024-07-12 16:03:19.616084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.616109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.616243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.616268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.616353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.616378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.616504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.616529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.616652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.616677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.616789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.616814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.617016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.617041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.617147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.617172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.617259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.617295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.617414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.617445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 
00:26:22.423 [2024-07-12 16:03:19.617597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.617622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.617807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.617833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.617985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.618010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.618157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.618182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.618290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.618315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.618400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.618425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.618535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.618559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.618643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.618672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.618806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.618831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 [2024-07-12 16:03:19.619035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.423 [2024-07-12 16:03:19.619066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13cc1e0 with addr=10.0.0.2, port=4420 00:26:22.423 qpair failed and we were unable to recover it. 
00:26:22.423 [2024-07-12 16:03:19.619186] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.423 [2024-07-12 16:03:19.621568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.423 [2024-07-12 16:03:19.621692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.423 [2024-07-12 16:03:19.621718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.423 [2024-07-12 16:03:19.621753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.423 [2024-07-12 16:03:19.621768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.423 [2024-07-12 16:03:19.621804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.423 qpair failed and we were unable to recover it. 00:26:22.423 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.423 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:22.423 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:22.423 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:22.423 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:22.423 16:03:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 864464 00:26:22.423 [2024-07-12 16:03:19.631573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.423 [2024-07-12 16:03:19.631679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.423 [2024-07-12 16:03:19.631714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.423 [2024-07-12 16:03:19.631728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.423 [2024-07-12 16:03:19.631750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.423 [2024-07-12 16:03:19.631790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.424 qpair failed and we were unable to recover it. 
00:26:22.682 [2024-07-12 16:03:19.641512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.682 [2024-07-12 16:03:19.641615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.682 [2024-07-12 16:03:19.641640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.682 [2024-07-12 16:03:19.641654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.682 [2024-07-12 16:03:19.641671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.682 [2024-07-12 16:03:19.641700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.682 qpair failed and we were unable to recover it. 00:26:22.682 [2024-07-12 16:03:19.651545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.682 [2024-07-12 16:03:19.651657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.682 [2024-07-12 16:03:19.651682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.683 [2024-07-12 16:03:19.651696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.683 [2024-07-12 16:03:19.651708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.683 [2024-07-12 16:03:19.651753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.683 qpair failed and we were unable to recover it. 00:26:22.683 [2024-07-12 16:03:19.661566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.683 [2024-07-12 16:03:19.661656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.683 [2024-07-12 16:03:19.661680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.683 [2024-07-12 16:03:19.661695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.683 [2024-07-12 16:03:19.661707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.683 [2024-07-12 16:03:19.661734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.683 qpair failed and we were unable to recover it. 
00:26:22.683 [2024-07-12 16:03:19.671510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.683 [2024-07-12 16:03:19.671598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.683 [2024-07-12 16:03:19.671622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.683 [2024-07-12 16:03:19.671637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.683 [2024-07-12 16:03:19.671649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.683 [2024-07-12 16:03:19.671677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.683 qpair failed and we were unable to recover it. 00:26:22.683 [2024-07-12 16:03:19.681598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.683 [2024-07-12 16:03:19.681708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.683 [2024-07-12 16:03:19.681735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.683 [2024-07-12 16:03:19.681761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.683 [2024-07-12 16:03:19.681773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.683 [2024-07-12 16:03:19.681802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.683 qpair failed and we were unable to recover it. 00:26:22.683 [2024-07-12 16:03:19.691586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.683 [2024-07-12 16:03:19.691751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.683 [2024-07-12 16:03:19.691777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.683 [2024-07-12 16:03:19.691793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.683 [2024-07-12 16:03:19.691805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.683 [2024-07-12 16:03:19.691833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.683 qpair failed and we were unable to recover it. 
00:26:22.683 [2024-07-12 16:03:19.701607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.683 [2024-07-12 16:03:19.701715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.683 [2024-07-12 16:03:19.701746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.683 [2024-07-12 16:03:19.701763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.683 [2024-07-12 16:03:19.701775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.683 [2024-07-12 16:03:19.701803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.683 qpair failed and we were unable to recover it. 00:26:22.683 [2024-07-12 16:03:19.711641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.683 [2024-07-12 16:03:19.711791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.683 [2024-07-12 16:03:19.711817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.683 [2024-07-12 16:03:19.711832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.683 [2024-07-12 16:03:19.711844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.683 [2024-07-12 16:03:19.711883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.683 qpair failed and we were unable to recover it. 00:26:22.683 [2024-07-12 16:03:19.721762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.683 [2024-07-12 16:03:19.721867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.683 [2024-07-12 16:03:19.721893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.683 [2024-07-12 16:03:19.721907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.683 [2024-07-12 16:03:19.721919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.683 [2024-07-12 16:03:19.721948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.683 qpair failed and we were unable to recover it. 
00:26:22.683 [2024-07-12 16:03:19.731710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.683 [2024-07-12 16:03:19.731828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.683 [2024-07-12 16:03:19.731853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.683 [2024-07-12 16:03:19.731867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.683 [2024-07-12 16:03:19.731885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.683 [2024-07-12 16:03:19.731914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.683 qpair failed and we were unable to recover it. 00:26:22.683 [2024-07-12 16:03:19.741780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.683 [2024-07-12 16:03:19.741872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.683 [2024-07-12 16:03:19.741897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.683 [2024-07-12 16:03:19.741911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.683 [2024-07-12 16:03:19.741924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.683 [2024-07-12 16:03:19.741952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.683 qpair failed and we were unable to recover it. 00:26:22.683 [2024-07-12 16:03:19.751811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.683 [2024-07-12 16:03:19.751897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.683 [2024-07-12 16:03:19.751922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.683 [2024-07-12 16:03:19.751936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.683 [2024-07-12 16:03:19.751948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.683 [2024-07-12 16:03:19.751976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.683 qpair failed and we were unable to recover it. 
00:26:22.683 [2024-07-12 16:03:19.761816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.683 [2024-07-12 16:03:19.761899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.683 [2024-07-12 16:03:19.761923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.683 [2024-07-12 16:03:19.761937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.683 [2024-07-12 16:03:19.761949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.683 [2024-07-12 16:03:19.761978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.683 qpair failed and we were unable to recover it. 00:26:22.683 [2024-07-12 16:03:19.771821] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.683 [2024-07-12 16:03:19.771933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.683 [2024-07-12 16:03:19.771958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.683 [2024-07-12 16:03:19.771972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.683 [2024-07-12 16:03:19.771984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.683 [2024-07-12 16:03:19.772012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.683 qpair failed and we were unable to recover it. 00:26:22.683 [2024-07-12 16:03:19.781878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.683 [2024-07-12 16:03:19.781969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.683 [2024-07-12 16:03:19.781994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.683 [2024-07-12 16:03:19.782008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.683 [2024-07-12 16:03:19.782020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.683 [2024-07-12 16:03:19.782048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 
00:26:22.684 [2024-07-12 16:03:19.791954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.684 [2024-07-12 16:03:19.792057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.684 [2024-07-12 16:03:19.792083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.684 [2024-07-12 16:03:19.792098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.684 [2024-07-12 16:03:19.792110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.684 [2024-07-12 16:03:19.792137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 00:26:22.684 [2024-07-12 16:03:19.801941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.684 [2024-07-12 16:03:19.802029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.684 [2024-07-12 16:03:19.802052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.684 [2024-07-12 16:03:19.802066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.684 [2024-07-12 16:03:19.802078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.684 [2024-07-12 16:03:19.802106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 00:26:22.684 [2024-07-12 16:03:19.811917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.684 [2024-07-12 16:03:19.812006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.684 [2024-07-12 16:03:19.812034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.684 [2024-07-12 16:03:19.812049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.684 [2024-07-12 16:03:19.812061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.684 [2024-07-12 16:03:19.812089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 
00:26:22.684 [2024-07-12 16:03:19.822061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.684 [2024-07-12 16:03:19.822182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.684 [2024-07-12 16:03:19.822206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.684 [2024-07-12 16:03:19.822226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.684 [2024-07-12 16:03:19.822239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.684 [2024-07-12 16:03:19.822268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 00:26:22.684 [2024-07-12 16:03:19.831975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.684 [2024-07-12 16:03:19.832059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.684 [2024-07-12 16:03:19.832083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.684 [2024-07-12 16:03:19.832098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.684 [2024-07-12 16:03:19.832110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.684 [2024-07-12 16:03:19.832137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 00:26:22.684 [2024-07-12 16:03:19.842060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.684 [2024-07-12 16:03:19.842187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.684 [2024-07-12 16:03:19.842212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.684 [2024-07-12 16:03:19.842227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.684 [2024-07-12 16:03:19.842239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.684 [2024-07-12 16:03:19.842276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 
00:26:22.684 [2024-07-12 16:03:19.852108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.684 [2024-07-12 16:03:19.852214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.684 [2024-07-12 16:03:19.852239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.684 [2024-07-12 16:03:19.852254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.684 [2024-07-12 16:03:19.852266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.684 [2024-07-12 16:03:19.852293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 00:26:22.684 [2024-07-12 16:03:19.862114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.684 [2024-07-12 16:03:19.862215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.684 [2024-07-12 16:03:19.862240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.684 [2024-07-12 16:03:19.862255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.684 [2024-07-12 16:03:19.862267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.684 [2024-07-12 16:03:19.862295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 00:26:22.684 [2024-07-12 16:03:19.872136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.684 [2024-07-12 16:03:19.872249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.684 [2024-07-12 16:03:19.872275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.684 [2024-07-12 16:03:19.872289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.684 [2024-07-12 16:03:19.872301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.684 [2024-07-12 16:03:19.872329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 
00:26:22.684 [2024-07-12 16:03:19.882143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.684 [2024-07-12 16:03:19.882270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.684 [2024-07-12 16:03:19.882295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.684 [2024-07-12 16:03:19.882310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.684 [2024-07-12 16:03:19.882322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.684 [2024-07-12 16:03:19.882349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 00:26:22.684 [2024-07-12 16:03:19.892182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.684 [2024-07-12 16:03:19.892288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.684 [2024-07-12 16:03:19.892313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.684 [2024-07-12 16:03:19.892328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.684 [2024-07-12 16:03:19.892341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.684 [2024-07-12 16:03:19.892369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 00:26:22.684 [2024-07-12 16:03:19.902223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.684 [2024-07-12 16:03:19.902337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.684 [2024-07-12 16:03:19.902362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.684 [2024-07-12 16:03:19.902376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.684 [2024-07-12 16:03:19.902389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.684 [2024-07-12 16:03:19.902417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 
00:26:22.684 [2024-07-12 16:03:19.912215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.684 [2024-07-12 16:03:19.912313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.684 [2024-07-12 16:03:19.912338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.684 [2024-07-12 16:03:19.912358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.684 [2024-07-12 16:03:19.912371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.684 [2024-07-12 16:03:19.912399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 00:26:22.684 [2024-07-12 16:03:19.922282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.684 [2024-07-12 16:03:19.922387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.684 [2024-07-12 16:03:19.922411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.684 [2024-07-12 16:03:19.922425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.684 [2024-07-12 16:03:19.922437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.684 [2024-07-12 16:03:19.922465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.684 qpair failed and we were unable to recover it. 00:26:22.684 [2024-07-12 16:03:19.932292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.685 [2024-07-12 16:03:19.932398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.685 [2024-07-12 16:03:19.932423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.685 [2024-07-12 16:03:19.932438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.685 [2024-07-12 16:03:19.932450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.685 [2024-07-12 16:03:19.932478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.685 qpair failed and we were unable to recover it. 
00:26:22.685 [2024-07-12 16:03:19.942277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.685 [2024-07-12 16:03:19.942391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.685 [2024-07-12 16:03:19.942418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.685 [2024-07-12 16:03:19.942432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.685 [2024-07-12 16:03:19.942444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.685 [2024-07-12 16:03:19.942472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.685 qpair failed and we were unable to recover it. 00:26:22.685 [2024-07-12 16:03:19.952306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.685 [2024-07-12 16:03:19.952410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.685 [2024-07-12 16:03:19.952436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.685 [2024-07-12 16:03:19.952450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.685 [2024-07-12 16:03:19.952462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.685 [2024-07-12 16:03:19.952490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.685 qpair failed and we were unable to recover it. 00:26:22.685 [2024-07-12 16:03:19.962435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.685 [2024-07-12 16:03:19.962547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.685 [2024-07-12 16:03:19.962573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.685 [2024-07-12 16:03:19.962587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.685 [2024-07-12 16:03:19.962600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.685 [2024-07-12 16:03:19.962628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.685 qpair failed and we were unable to recover it. 
00:26:22.685 [2024-07-12 16:03:19.972495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.685 [2024-07-12 16:03:19.972592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.685 [2024-07-12 16:03:19.972617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.685 [2024-07-12 16:03:19.972632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.685 [2024-07-12 16:03:19.972645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.685 [2024-07-12 16:03:19.972684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.685 qpair failed and we were unable to recover it. 00:26:22.944 [2024-07-12 16:03:19.982416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.944 [2024-07-12 16:03:19.982524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.944 [2024-07-12 16:03:19.982552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.944 [2024-07-12 16:03:19.982567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.944 [2024-07-12 16:03:19.982579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.944 [2024-07-12 16:03:19.982609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.944 qpair failed and we were unable to recover it. 00:26:22.944 [2024-07-12 16:03:19.992439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.944 [2024-07-12 16:03:19.992541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.944 [2024-07-12 16:03:19.992568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.944 [2024-07-12 16:03:19.992583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.944 [2024-07-12 16:03:19.992595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.944 [2024-07-12 16:03:19.992622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.944 qpair failed and we were unable to recover it. 
00:26:22.944 [2024-07-12 16:03:20.002420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.944 [2024-07-12 16:03:20.002511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.944 [2024-07-12 16:03:20.002535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.944 [2024-07-12 16:03:20.002555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.944 [2024-07-12 16:03:20.002568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.944 [2024-07-12 16:03:20.002597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.944 qpair failed and we were unable to recover it. 00:26:22.944 [2024-07-12 16:03:20.012523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.944 [2024-07-12 16:03:20.012649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.944 [2024-07-12 16:03:20.012677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.944 [2024-07-12 16:03:20.012692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.944 [2024-07-12 16:03:20.012705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.944 [2024-07-12 16:03:20.012753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.944 qpair failed and we were unable to recover it. 00:26:22.944 [2024-07-12 16:03:20.022631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.944 [2024-07-12 16:03:20.022764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.944 [2024-07-12 16:03:20.022798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.944 [2024-07-12 16:03:20.022814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.944 [2024-07-12 16:03:20.022827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.944 [2024-07-12 16:03:20.022858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.944 qpair failed and we were unable to recover it. 
00:26:22.944 [2024-07-12 16:03:20.032585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.944 [2024-07-12 16:03:20.032687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.944 [2024-07-12 16:03:20.032716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.944 [2024-07-12 16:03:20.032731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.944 [2024-07-12 16:03:20.032751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.944 [2024-07-12 16:03:20.032780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.944 qpair failed and we were unable to recover it. 00:26:22.944 [2024-07-12 16:03:20.042658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.944 [2024-07-12 16:03:20.042775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.944 [2024-07-12 16:03:20.042801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.944 [2024-07-12 16:03:20.042816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.944 [2024-07-12 16:03:20.042829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.944 [2024-07-12 16:03:20.042857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.944 qpair failed and we were unable to recover it. 00:26:22.944 [2024-07-12 16:03:20.052650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.944 [2024-07-12 16:03:20.052766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.944 [2024-07-12 16:03:20.052792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.944 [2024-07-12 16:03:20.052807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.944 [2024-07-12 16:03:20.052819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.944 [2024-07-12 16:03:20.052847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.944 qpair failed and we were unable to recover it. 
00:26:22.944 [2024-07-12 16:03:20.062665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.944 [2024-07-12 16:03:20.062770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.944 [2024-07-12 16:03:20.062794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.944 [2024-07-12 16:03:20.062808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.944 [2024-07-12 16:03:20.062821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.944 [2024-07-12 16:03:20.062849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.944 qpair failed and we were unable to recover it. 00:26:22.944 [2024-07-12 16:03:20.072670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.944 [2024-07-12 16:03:20.072777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.944 [2024-07-12 16:03:20.072801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.944 [2024-07-12 16:03:20.072815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.944 [2024-07-12 16:03:20.072828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.944 [2024-07-12 16:03:20.072856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.944 qpair failed and we were unable to recover it. 00:26:22.944 [2024-07-12 16:03:20.082793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.944 [2024-07-12 16:03:20.082886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.944 [2024-07-12 16:03:20.082915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.944 [2024-07-12 16:03:20.082938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.944 [2024-07-12 16:03:20.082951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.944 [2024-07-12 16:03:20.082979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.944 qpair failed and we were unable to recover it. 
00:26:22.944 [2024-07-12 16:03:20.092728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.944 [2024-07-12 16:03:20.092861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.944 [2024-07-12 16:03:20.092895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.944 [2024-07-12 16:03:20.092911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.944 [2024-07-12 16:03:20.092924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.944 [2024-07-12 16:03:20.092956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.944 qpair failed and we were unable to recover it. 00:26:22.944 [2024-07-12 16:03:20.102777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.944 [2024-07-12 16:03:20.102873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.944 [2024-07-12 16:03:20.102899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.944 [2024-07-12 16:03:20.102914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.944 [2024-07-12 16:03:20.102926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.944 [2024-07-12 16:03:20.102955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.944 qpair failed and we were unable to recover it. 00:26:22.944 [2024-07-12 16:03:20.112786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.944 [2024-07-12 16:03:20.112877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.944 [2024-07-12 16:03:20.112903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.944 [2024-07-12 16:03:20.112918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.945 [2024-07-12 16:03:20.112930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.945 [2024-07-12 16:03:20.112964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.945 qpair failed and we were unable to recover it. 
00:26:22.945 [2024-07-12 16:03:20.122815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.945 [2024-07-12 16:03:20.122901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.945 [2024-07-12 16:03:20.122924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.945 [2024-07-12 16:03:20.122939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.945 [2024-07-12 16:03:20.122951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.945 [2024-07-12 16:03:20.122979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.945 qpair failed and we were unable to recover it. 00:26:22.945 [2024-07-12 16:03:20.132846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.945 [2024-07-12 16:03:20.132958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.945 [2024-07-12 16:03:20.132984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.945 [2024-07-12 16:03:20.132999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.945 [2024-07-12 16:03:20.133011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.945 [2024-07-12 16:03:20.133045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.945 qpair failed and we were unable to recover it. 00:26:22.945 [2024-07-12 16:03:20.142860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.945 [2024-07-12 16:03:20.142990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.945 [2024-07-12 16:03:20.143016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.945 [2024-07-12 16:03:20.143031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.945 [2024-07-12 16:03:20.143044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.945 [2024-07-12 16:03:20.143072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.945 qpair failed and we were unable to recover it. 
00:26:22.945 [2024-07-12 16:03:20.152870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.945 [2024-07-12 16:03:20.152995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.945 [2024-07-12 16:03:20.153019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.945 [2024-07-12 16:03:20.153033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.945 [2024-07-12 16:03:20.153045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.945 [2024-07-12 16:03:20.153073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.945 qpair failed and we were unable to recover it. 00:26:22.945 [2024-07-12 16:03:20.162898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.945 [2024-07-12 16:03:20.163028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.945 [2024-07-12 16:03:20.163069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.945 [2024-07-12 16:03:20.163083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.945 [2024-07-12 16:03:20.163095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.945 [2024-07-12 16:03:20.163134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.945 qpair failed and we were unable to recover it. 00:26:22.945 [2024-07-12 16:03:20.172961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.945 [2024-07-12 16:03:20.173088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.945 [2024-07-12 16:03:20.173113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.945 [2024-07-12 16:03:20.173128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.945 [2024-07-12 16:03:20.173140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.945 [2024-07-12 16:03:20.173168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.945 qpair failed and we were unable to recover it. 
00:26:22.945 [2024-07-12 16:03:20.182992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.945 [2024-07-12 16:03:20.183101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.945 [2024-07-12 16:03:20.183130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.945 [2024-07-12 16:03:20.183144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.945 [2024-07-12 16:03:20.183156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.945 [2024-07-12 16:03:20.183184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.945 qpair failed and we were unable to recover it. 00:26:22.945 [2024-07-12 16:03:20.193033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.945 [2024-07-12 16:03:20.193120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.945 [2024-07-12 16:03:20.193144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.945 [2024-07-12 16:03:20.193158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.945 [2024-07-12 16:03:20.193170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.945 [2024-07-12 16:03:20.193197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.945 qpair failed and we were unable to recover it. 00:26:22.945 [2024-07-12 16:03:20.203077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.945 [2024-07-12 16:03:20.203188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.945 [2024-07-12 16:03:20.203211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.945 [2024-07-12 16:03:20.203225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.945 [2024-07-12 16:03:20.203236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.945 [2024-07-12 16:03:20.203263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.945 qpair failed and we were unable to recover it. 
00:26:22.945 [2024-07-12 16:03:20.213080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.945 [2024-07-12 16:03:20.213166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.945 [2024-07-12 16:03:20.213191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.945 [2024-07-12 16:03:20.213207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.945 [2024-07-12 16:03:20.213219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.945 [2024-07-12 16:03:20.213247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.945 qpair failed and we were unable to recover it. 00:26:22.945 [2024-07-12 16:03:20.223074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.945 [2024-07-12 16:03:20.223183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.945 [2024-07-12 16:03:20.223208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.945 [2024-07-12 16:03:20.223222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.945 [2024-07-12 16:03:20.223234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.945 [2024-07-12 16:03:20.223268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.945 qpair failed and we were unable to recover it. 00:26:22.945 [2024-07-12 16:03:20.233138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.945 [2024-07-12 16:03:20.233234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.945 [2024-07-12 16:03:20.233260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.945 [2024-07-12 16:03:20.233276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.945 [2024-07-12 16:03:20.233289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:22.945 [2024-07-12 16:03:20.233318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:22.945 qpair failed and we were unable to recover it. 
00:26:23.204 [2024-07-12 16:03:20.243127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.204 [2024-07-12 16:03:20.243222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.204 [2024-07-12 16:03:20.243248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.204 [2024-07-12 16:03:20.243263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.204 [2024-07-12 16:03:20.243277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.204 [2024-07-12 16:03:20.243305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.204 qpair failed and we were unable to recover it. 00:26:23.204 [2024-07-12 16:03:20.253173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.204 [2024-07-12 16:03:20.253271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.204 [2024-07-12 16:03:20.253296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.204 [2024-07-12 16:03:20.253311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.204 [2024-07-12 16:03:20.253324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.204 [2024-07-12 16:03:20.253352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.204 qpair failed and we were unable to recover it. 00:26:23.204 [2024-07-12 16:03:20.263165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.204 [2024-07-12 16:03:20.263256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.204 [2024-07-12 16:03:20.263280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.204 [2024-07-12 16:03:20.263295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.204 [2024-07-12 16:03:20.263307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.204 [2024-07-12 16:03:20.263336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.204 qpair failed and we were unable to recover it. 
00:26:23.204 [2024-07-12 16:03:20.273198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.204 [2024-07-12 16:03:20.273294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.204 [2024-07-12 16:03:20.273328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.204 [2024-07-12 16:03:20.273344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.204 [2024-07-12 16:03:20.273357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.204 [2024-07-12 16:03:20.273384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.204 qpair failed and we were unable to recover it. 00:26:23.204 [2024-07-12 16:03:20.283227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.204 [2024-07-12 16:03:20.283345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.204 [2024-07-12 16:03:20.283369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.204 [2024-07-12 16:03:20.283383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.204 [2024-07-12 16:03:20.283395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.204 [2024-07-12 16:03:20.283423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.204 qpair failed and we were unable to recover it. 00:26:23.204 [2024-07-12 16:03:20.293335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.204 [2024-07-12 16:03:20.293421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.204 [2024-07-12 16:03:20.293445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.204 [2024-07-12 16:03:20.293459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.204 [2024-07-12 16:03:20.293472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.204 [2024-07-12 16:03:20.293499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.204 qpair failed and we were unable to recover it. 
00:26:23.204 [2024-07-12 16:03:20.303371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.204 [2024-07-12 16:03:20.303496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.204 [2024-07-12 16:03:20.303520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.204 [2024-07-12 16:03:20.303535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.204 [2024-07-12 16:03:20.303548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.204 [2024-07-12 16:03:20.303575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.204 qpair failed and we were unable to recover it. 00:26:23.204 [2024-07-12 16:03:20.313354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.204 [2024-07-12 16:03:20.313462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.204 [2024-07-12 16:03:20.313487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.204 [2024-07-12 16:03:20.313502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.204 [2024-07-12 16:03:20.313515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.204 [2024-07-12 16:03:20.313547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.204 qpair failed and we were unable to recover it. 00:26:23.204 [2024-07-12 16:03:20.323332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.204 [2024-07-12 16:03:20.323421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.204 [2024-07-12 16:03:20.323446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.204 [2024-07-12 16:03:20.323461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.204 [2024-07-12 16:03:20.323474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.204 [2024-07-12 16:03:20.323502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.204 qpair failed and we were unable to recover it. 
00:26:23.204 [2024-07-12 16:03:20.333371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.204 [2024-07-12 16:03:20.333460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.204 [2024-07-12 16:03:20.333483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.204 [2024-07-12 16:03:20.333498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.204 [2024-07-12 16:03:20.333511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.204 [2024-07-12 16:03:20.333538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.204 qpair failed and we were unable to recover it. 00:26:23.204 [2024-07-12 16:03:20.343347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.204 [2024-07-12 16:03:20.343450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.204 [2024-07-12 16:03:20.343473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.204 [2024-07-12 16:03:20.343488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.204 [2024-07-12 16:03:20.343500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.204 [2024-07-12 16:03:20.343528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.204 qpair failed and we were unable to recover it. 00:26:23.204 [2024-07-12 16:03:20.353424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.204 [2024-07-12 16:03:20.353524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.204 [2024-07-12 16:03:20.353547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.204 [2024-07-12 16:03:20.353562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.204 [2024-07-12 16:03:20.353575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.204 [2024-07-12 16:03:20.353602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.204 qpair failed and we were unable to recover it. 
00:26:23.204 [2024-07-12 16:03:20.363409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.204 [2024-07-12 16:03:20.363533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.204 [2024-07-12 16:03:20.363563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.204 [2024-07-12 16:03:20.363578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.204 [2024-07-12 16:03:20.363591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.204 [2024-07-12 16:03:20.363620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.204 qpair failed and we were unable to recover it. 00:26:23.204 [2024-07-12 16:03:20.373460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.204 [2024-07-12 16:03:20.373547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.204 [2024-07-12 16:03:20.373572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.204 [2024-07-12 16:03:20.373587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.205 [2024-07-12 16:03:20.373600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.205 [2024-07-12 16:03:20.373627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.205 qpair failed and we were unable to recover it. 00:26:23.205 [2024-07-12 16:03:20.383505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.205 [2024-07-12 16:03:20.383594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.205 [2024-07-12 16:03:20.383617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.205 [2024-07-12 16:03:20.383632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.205 [2024-07-12 16:03:20.383644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.205 [2024-07-12 16:03:20.383672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.205 qpair failed and we were unable to recover it. 
00:26:23.205 [2024-07-12 16:03:20.393647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.205 [2024-07-12 16:03:20.393734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.205 [2024-07-12 16:03:20.393787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.205 [2024-07-12 16:03:20.393802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.205 [2024-07-12 16:03:20.393816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.205 [2024-07-12 16:03:20.393844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.205 qpair failed and we were unable to recover it. 00:26:23.205 [2024-07-12 16:03:20.403589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.205 [2024-07-12 16:03:20.403688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.205 [2024-07-12 16:03:20.403712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.205 [2024-07-12 16:03:20.403749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.205 [2024-07-12 16:03:20.403769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.205 [2024-07-12 16:03:20.403800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.205 qpair failed and we were unable to recover it. 00:26:23.205 [2024-07-12 16:03:20.413613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.205 [2024-07-12 16:03:20.413706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.205 [2024-07-12 16:03:20.413729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.205 [2024-07-12 16:03:20.413767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.205 [2024-07-12 16:03:20.413781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.205 [2024-07-12 16:03:20.413811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.205 qpair failed and we were unable to recover it. 
00:26:23.205 [2024-07-12 16:03:20.423728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.205 [2024-07-12 16:03:20.423886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.205 [2024-07-12 16:03:20.423909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.205 [2024-07-12 16:03:20.423924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.205 [2024-07-12 16:03:20.423937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.205 [2024-07-12 16:03:20.423966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.205 qpair failed and we were unable to recover it. 00:26:23.205 [2024-07-12 16:03:20.433718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.205 [2024-07-12 16:03:20.433848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.205 [2024-07-12 16:03:20.433874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.205 [2024-07-12 16:03:20.433890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.205 [2024-07-12 16:03:20.433902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.205 [2024-07-12 16:03:20.433931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.205 qpair failed and we were unable to recover it. 00:26:23.205 [2024-07-12 16:03:20.443755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.205 [2024-07-12 16:03:20.443850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.205 [2024-07-12 16:03:20.443875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.205 [2024-07-12 16:03:20.443890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.205 [2024-07-12 16:03:20.443902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.205 [2024-07-12 16:03:20.443931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.205 qpair failed and we were unable to recover it. 
00:26:23.205 [2024-07-12 16:03:20.453793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.205 [2024-07-12 16:03:20.453913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.205 [2024-07-12 16:03:20.453938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.205 [2024-07-12 16:03:20.453954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.205 [2024-07-12 16:03:20.453967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.205 [2024-07-12 16:03:20.453997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.205 qpair failed and we were unable to recover it. 00:26:23.205 [2024-07-12 16:03:20.463752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.205 [2024-07-12 16:03:20.463879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.205 [2024-07-12 16:03:20.463905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.205 [2024-07-12 16:03:20.463921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.205 [2024-07-12 16:03:20.463933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.205 [2024-07-12 16:03:20.463962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.205 qpair failed and we were unable to recover it. 00:26:23.205 [2024-07-12 16:03:20.473777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.205 [2024-07-12 16:03:20.473874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.205 [2024-07-12 16:03:20.473899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.205 [2024-07-12 16:03:20.473914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.205 [2024-07-12 16:03:20.473928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.205 [2024-07-12 16:03:20.473957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.205 qpair failed and we were unable to recover it. 
00:26:23.205 [2024-07-12 16:03:20.483850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.205 [2024-07-12 16:03:20.483985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.205 [2024-07-12 16:03:20.484011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.205 [2024-07-12 16:03:20.484043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.205 [2024-07-12 16:03:20.484057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.205 [2024-07-12 16:03:20.484084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.205 qpair failed and we were unable to recover it. 00:26:23.205 [2024-07-12 16:03:20.493828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.205 [2024-07-12 16:03:20.493975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.205 [2024-07-12 16:03:20.494004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.205 [2024-07-12 16:03:20.494021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.205 [2024-07-12 16:03:20.494039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.205 [2024-07-12 16:03:20.494070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.205 qpair failed and we were unable to recover it. 00:26:23.464 [2024-07-12 16:03:20.503854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.464 [2024-07-12 16:03:20.503946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.464 [2024-07-12 16:03:20.503973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.464 [2024-07-12 16:03:20.503988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.464 [2024-07-12 16:03:20.504002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.464 [2024-07-12 16:03:20.504032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.464 qpair failed and we were unable to recover it. 
00:26:23.464 [2024-07-12 16:03:20.513883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.464 [2024-07-12 16:03:20.513969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.464 [2024-07-12 16:03:20.513994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.464 [2024-07-12 16:03:20.514011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.464 [2024-07-12 16:03:20.514024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.464 [2024-07-12 16:03:20.514067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.464 qpair failed and we were unable to recover it. 00:26:23.464 [2024-07-12 16:03:20.523951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.464 [2024-07-12 16:03:20.524041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.464 [2024-07-12 16:03:20.524082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.464 [2024-07-12 16:03:20.524096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.464 [2024-07-12 16:03:20.524109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.464 [2024-07-12 16:03:20.524137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.464 qpair failed and we were unable to recover it. 00:26:23.464 [2024-07-12 16:03:20.534078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.464 [2024-07-12 16:03:20.534167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.464 [2024-07-12 16:03:20.534191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.464 [2024-07-12 16:03:20.534206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.464 [2024-07-12 16:03:20.534219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.464 [2024-07-12 16:03:20.534247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.464 qpair failed and we were unable to recover it. 
00:26:23.464 [2024-07-12 16:03:20.543964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.464 [2024-07-12 16:03:20.544059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.464 [2024-07-12 16:03:20.544086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.464 [2024-07-12 16:03:20.544116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.464 [2024-07-12 16:03:20.544129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.464 [2024-07-12 16:03:20.544157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.464 qpair failed and we were unable to recover it. 00:26:23.464 [2024-07-12 16:03:20.553977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.464 [2024-07-12 16:03:20.554115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.464 [2024-07-12 16:03:20.554142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.464 [2024-07-12 16:03:20.554157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.464 [2024-07-12 16:03:20.554169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.464 [2024-07-12 16:03:20.554198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.464 qpair failed and we were unable to recover it. 00:26:23.464 [2024-07-12 16:03:20.564032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.464 [2024-07-12 16:03:20.564114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.464 [2024-07-12 16:03:20.564138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.464 [2024-07-12 16:03:20.564153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.564165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.564193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 
00:26:23.465 [2024-07-12 16:03:20.574055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.465 [2024-07-12 16:03:20.574165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.465 [2024-07-12 16:03:20.574188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.465 [2024-07-12 16:03:20.574203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.574215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.574243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 00:26:23.465 [2024-07-12 16:03:20.584063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.465 [2024-07-12 16:03:20.584195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.465 [2024-07-12 16:03:20.584221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.465 [2024-07-12 16:03:20.584241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.584254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.584282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 00:26:23.465 [2024-07-12 16:03:20.594130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.465 [2024-07-12 16:03:20.594236] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.465 [2024-07-12 16:03:20.594260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.465 [2024-07-12 16:03:20.594275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.594287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.594315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 
00:26:23.465 [2024-07-12 16:03:20.604171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.465 [2024-07-12 16:03:20.604258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.465 [2024-07-12 16:03:20.604282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.465 [2024-07-12 16:03:20.604296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.604310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.604338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 00:26:23.465 [2024-07-12 16:03:20.614220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.465 [2024-07-12 16:03:20.614310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.465 [2024-07-12 16:03:20.614336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.465 [2024-07-12 16:03:20.614351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.614363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.614392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 00:26:23.465 [2024-07-12 16:03:20.624195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.465 [2024-07-12 16:03:20.624314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.465 [2024-07-12 16:03:20.624338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.465 [2024-07-12 16:03:20.624353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.624365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.624392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 
00:26:23.465 [2024-07-12 16:03:20.634262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.465 [2024-07-12 16:03:20.634349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.465 [2024-07-12 16:03:20.634374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.465 [2024-07-12 16:03:20.634389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.634401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.634429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 00:26:23.465 [2024-07-12 16:03:20.644241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.465 [2024-07-12 16:03:20.644336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.465 [2024-07-12 16:03:20.644360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.465 [2024-07-12 16:03:20.644375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.644388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.644417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 00:26:23.465 [2024-07-12 16:03:20.654371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.465 [2024-07-12 16:03:20.654489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.465 [2024-07-12 16:03:20.654512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.465 [2024-07-12 16:03:20.654527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.654539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.654580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 
00:26:23.465 [2024-07-12 16:03:20.664307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.465 [2024-07-12 16:03:20.664414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.465 [2024-07-12 16:03:20.664439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.465 [2024-07-12 16:03:20.664454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.664467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.664496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 00:26:23.465 [2024-07-12 16:03:20.674342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.465 [2024-07-12 16:03:20.674449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.465 [2024-07-12 16:03:20.674472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.465 [2024-07-12 16:03:20.674492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.674505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.674534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 00:26:23.465 [2024-07-12 16:03:20.684424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.465 [2024-07-12 16:03:20.684553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.465 [2024-07-12 16:03:20.684576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.465 [2024-07-12 16:03:20.684591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.684603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.684631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 
00:26:23.465 [2024-07-12 16:03:20.694453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.465 [2024-07-12 16:03:20.694577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.465 [2024-07-12 16:03:20.694600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.465 [2024-07-12 16:03:20.694615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.694628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.694655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 00:26:23.465 [2024-07-12 16:03:20.704430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.465 [2024-07-12 16:03:20.704522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.465 [2024-07-12 16:03:20.704545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.465 [2024-07-12 16:03:20.704560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.465 [2024-07-12 16:03:20.704572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.465 [2024-07-12 16:03:20.704600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.465 qpair failed and we were unable to recover it. 00:26:23.466 [2024-07-12 16:03:20.714462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.466 [2024-07-12 16:03:20.714553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.466 [2024-07-12 16:03:20.714577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.466 [2024-07-12 16:03:20.714592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.466 [2024-07-12 16:03:20.714605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.466 [2024-07-12 16:03:20.714633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.466 qpair failed and we were unable to recover it. 
00:26:23.466 [2024-07-12 16:03:20.724534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.466 [2024-07-12 16:03:20.724630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.466 [2024-07-12 16:03:20.724654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.466 [2024-07-12 16:03:20.724669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.466 [2024-07-12 16:03:20.724681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.466 [2024-07-12 16:03:20.724710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.466 qpair failed and we were unable to recover it. 00:26:23.466 [2024-07-12 16:03:20.734545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.466 [2024-07-12 16:03:20.734637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.466 [2024-07-12 16:03:20.734661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.466 [2024-07-12 16:03:20.734676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.466 [2024-07-12 16:03:20.734688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.466 [2024-07-12 16:03:20.734716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.466 qpair failed and we were unable to recover it. 00:26:23.466 [2024-07-12 16:03:20.744539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.466 [2024-07-12 16:03:20.744643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.466 [2024-07-12 16:03:20.744667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.466 [2024-07-12 16:03:20.744682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.466 [2024-07-12 16:03:20.744695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.466 [2024-07-12 16:03:20.744748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.466 qpair failed and we were unable to recover it. 
00:26:23.466 [2024-07-12 16:03:20.754573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.466 [2024-07-12 16:03:20.754674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.466 [2024-07-12 16:03:20.754701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.466 [2024-07-12 16:03:20.754716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.466 [2024-07-12 16:03:20.754729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.466 [2024-07-12 16:03:20.754768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.466 qpair failed and we were unable to recover it. 00:26:23.725 [2024-07-12 16:03:20.764620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.725 [2024-07-12 16:03:20.764713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.725 [2024-07-12 16:03:20.764764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.725 [2024-07-12 16:03:20.764786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.725 [2024-07-12 16:03:20.764800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.725 [2024-07-12 16:03:20.764831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.725 qpair failed and we were unable to recover it. 00:26:23.725 [2024-07-12 16:03:20.774638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.725 [2024-07-12 16:03:20.774733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.725 [2024-07-12 16:03:20.774780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.725 [2024-07-12 16:03:20.774796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.725 [2024-07-12 16:03:20.774808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.725 [2024-07-12 16:03:20.774837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.725 qpair failed and we were unable to recover it. 
00:26:23.725 [2024-07-12 16:03:20.784670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.725 [2024-07-12 16:03:20.784778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.725 [2024-07-12 16:03:20.784803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.725 [2024-07-12 16:03:20.784818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.725 [2024-07-12 16:03:20.784831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.725 [2024-07-12 16:03:20.784870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.725 qpair failed and we were unable to recover it. 00:26:23.725 [2024-07-12 16:03:20.794681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.725 [2024-07-12 16:03:20.794827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.725 [2024-07-12 16:03:20.794852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.725 [2024-07-12 16:03:20.794867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.725 [2024-07-12 16:03:20.794880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.725 [2024-07-12 16:03:20.794909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.725 qpair failed and we were unable to recover it. 00:26:23.725 [2024-07-12 16:03:20.804782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.725 [2024-07-12 16:03:20.804874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.725 [2024-07-12 16:03:20.804899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.725 [2024-07-12 16:03:20.804915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.725 [2024-07-12 16:03:20.804927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:23.725 [2024-07-12 16:03:20.804956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:23.725 qpair failed and we were unable to recover it. 
00:26:23.725 [2024-07-12 16:03:20.814764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:23.725 [2024-07-12 16:03:20.814859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:23.725 [2024-07-12 16:03:20.814884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:23.725 [2024-07-12 16:03:20.814899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:23.725 [2024-07-12 16:03:20.814912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0
00:26:23.725 [2024-07-12 16:03:20.814940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:23.725 qpair failed and we were unable to recover it.
[... the same seven-line CONNECT failure sequence (Unknown controller ID 0x1, Connect command failed rc -5, sct 1 sc 130, CQ transport error -6 on qpair id 3, "qpair failed and we were unable to recover it.") repeats for each retry on tqpair=0x13cc1e0, one attempt roughly every 10 ms, from 16:03:20.824 through 16:03:21.486; only the timestamps differ ...]
00:26:24.246 [2024-07-12 16:03:21.496701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:24.246 [2024-07-12 16:03:21.496827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:24.246 [2024-07-12 16:03:21.496853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:24.246 [2024-07-12 16:03:21.496869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:24.246 [2024-07-12 16:03:21.496882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0
00:26:24.246 [2024-07-12 16:03:21.496910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:24.246 qpair failed and we were unable to recover it.
00:26:24.246 [2024-07-12 16:03:21.506676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.247 [2024-07-12 16:03:21.506806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.247 [2024-07-12 16:03:21.506832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.247 [2024-07-12 16:03:21.506848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.247 [2024-07-12 16:03:21.506861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.247 [2024-07-12 16:03:21.506890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.247 qpair failed and we were unable to recover it. 00:26:24.247 [2024-07-12 16:03:21.516766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.247 [2024-07-12 16:03:21.516874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.247 [2024-07-12 16:03:21.516900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.247 [2024-07-12 16:03:21.516916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.247 [2024-07-12 16:03:21.516929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.247 [2024-07-12 16:03:21.516958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.247 qpair failed and we were unable to recover it. 00:26:24.247 [2024-07-12 16:03:21.526798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.247 [2024-07-12 16:03:21.526885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.247 [2024-07-12 16:03:21.526910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.247 [2024-07-12 16:03:21.526931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.247 [2024-07-12 16:03:21.526944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.247 [2024-07-12 16:03:21.526974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.247 qpair failed and we were unable to recover it. 
00:26:24.247 [2024-07-12 16:03:21.536825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.247 [2024-07-12 16:03:21.536963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.247 [2024-07-12 16:03:21.536992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.247 [2024-07-12 16:03:21.537008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.247 [2024-07-12 16:03:21.537029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.247 [2024-07-12 16:03:21.537060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.247 qpair failed and we were unable to recover it. 00:26:24.506 [2024-07-12 16:03:21.546837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.506 [2024-07-12 16:03:21.546956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.506 [2024-07-12 16:03:21.546984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.506 [2024-07-12 16:03:21.547000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.506 [2024-07-12 16:03:21.547013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.506 [2024-07-12 16:03:21.547042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.506 qpair failed and we were unable to recover it. 00:26:24.506 [2024-07-12 16:03:21.556883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.506 [2024-07-12 16:03:21.556977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.506 [2024-07-12 16:03:21.557002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.506 [2024-07-12 16:03:21.557031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.506 [2024-07-12 16:03:21.557044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.506 [2024-07-12 16:03:21.557073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.506 qpair failed and we were unable to recover it. 
00:26:24.506 [2024-07-12 16:03:21.566922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.506 [2024-07-12 16:03:21.567018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.506 [2024-07-12 16:03:21.567058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.506 [2024-07-12 16:03:21.567073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.506 [2024-07-12 16:03:21.567086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.506 [2024-07-12 16:03:21.567114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.506 qpair failed and we were unable to recover it. 00:26:24.506 [2024-07-12 16:03:21.576956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.506 [2024-07-12 16:03:21.577102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.506 [2024-07-12 16:03:21.577127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.506 [2024-07-12 16:03:21.577143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.506 [2024-07-12 16:03:21.577155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.506 [2024-07-12 16:03:21.577188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.506 qpair failed and we were unable to recover it. 00:26:24.506 [2024-07-12 16:03:21.586972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.506 [2024-07-12 16:03:21.587079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.506 [2024-07-12 16:03:21.587103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.506 [2024-07-12 16:03:21.587117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.506 [2024-07-12 16:03:21.587130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.506 [2024-07-12 16:03:21.587157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.506 qpair failed and we were unable to recover it. 
00:26:24.506 [2024-07-12 16:03:21.597007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.506 [2024-07-12 16:03:21.597114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.506 [2024-07-12 16:03:21.597137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.506 [2024-07-12 16:03:21.597151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.506 [2024-07-12 16:03:21.597164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.506 [2024-07-12 16:03:21.597191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.506 qpair failed and we were unable to recover it. 00:26:24.506 [2024-07-12 16:03:21.607025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.506 [2024-07-12 16:03:21.607140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.506 [2024-07-12 16:03:21.607164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.506 [2024-07-12 16:03:21.607178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.506 [2024-07-12 16:03:21.607190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.506 [2024-07-12 16:03:21.607218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.506 qpair failed and we were unable to recover it. 00:26:24.506 [2024-07-12 16:03:21.617166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.506 [2024-07-12 16:03:21.617280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.506 [2024-07-12 16:03:21.617307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.506 [2024-07-12 16:03:21.617330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.506 [2024-07-12 16:03:21.617344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.506 [2024-07-12 16:03:21.617382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.506 qpair failed and we were unable to recover it. 
00:26:24.506 [2024-07-12 16:03:21.627057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.506 [2024-07-12 16:03:21.627191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.506 [2024-07-12 16:03:21.627238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.506 [2024-07-12 16:03:21.627254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.506 [2024-07-12 16:03:21.627266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.506 [2024-07-12 16:03:21.627294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.506 qpair failed and we were unable to recover it. 00:26:24.506 [2024-07-12 16:03:21.637126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.637218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.637242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.637257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.507 [2024-07-12 16:03:21.637269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.507 [2024-07-12 16:03:21.637297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.507 qpair failed and we were unable to recover it. 00:26:24.507 [2024-07-12 16:03:21.647066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.647168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.647192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.647207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.507 [2024-07-12 16:03:21.647219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.507 [2024-07-12 16:03:21.647246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.507 qpair failed and we were unable to recover it. 
00:26:24.507 [2024-07-12 16:03:21.657211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.657321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.657344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.657358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.507 [2024-07-12 16:03:21.657371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.507 [2024-07-12 16:03:21.657399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.507 qpair failed and we were unable to recover it. 00:26:24.507 [2024-07-12 16:03:21.667148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.667239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.667263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.667277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.507 [2024-07-12 16:03:21.667289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.507 [2024-07-12 16:03:21.667318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.507 qpair failed and we were unable to recover it. 00:26:24.507 [2024-07-12 16:03:21.677313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.677397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.677423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.677438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.507 [2024-07-12 16:03:21.677450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.507 [2024-07-12 16:03:21.677477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.507 qpair failed and we were unable to recover it. 
00:26:24.507 [2024-07-12 16:03:21.687274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.687367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.687390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.687404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.507 [2024-07-12 16:03:21.687419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.507 [2024-07-12 16:03:21.687446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.507 qpair failed and we were unable to recover it. 00:26:24.507 [2024-07-12 16:03:21.697277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.697373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.697397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.697410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.507 [2024-07-12 16:03:21.697423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.507 [2024-07-12 16:03:21.697450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.507 qpair failed and we were unable to recover it. 00:26:24.507 [2024-07-12 16:03:21.707316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.707423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.707451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.707466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.507 [2024-07-12 16:03:21.707478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.507 [2024-07-12 16:03:21.707506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.507 qpair failed and we were unable to recover it. 
00:26:24.507 [2024-07-12 16:03:21.717418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.717507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.717532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.717547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.507 [2024-07-12 16:03:21.717560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.507 [2024-07-12 16:03:21.717588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.507 qpair failed and we were unable to recover it. 00:26:24.507 [2024-07-12 16:03:21.727299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.727391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.727414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.727428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.507 [2024-07-12 16:03:21.727440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.507 [2024-07-12 16:03:21.727467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.507 qpair failed and we were unable to recover it. 00:26:24.507 [2024-07-12 16:03:21.737395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.737503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.737528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.737543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.507 [2024-07-12 16:03:21.737555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.507 [2024-07-12 16:03:21.737582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.507 qpair failed and we were unable to recover it. 
00:26:24.507 [2024-07-12 16:03:21.747359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.747449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.747475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.747489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.507 [2024-07-12 16:03:21.747502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.507 [2024-07-12 16:03:21.747534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.507 qpair failed and we were unable to recover it. 00:26:24.507 [2024-07-12 16:03:21.757381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.757471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.757494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.757509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.507 [2024-07-12 16:03:21.757521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.507 [2024-07-12 16:03:21.757549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.507 qpair failed and we were unable to recover it. 00:26:24.507 [2024-07-12 16:03:21.767424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.767516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.767539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.767553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.507 [2024-07-12 16:03:21.767564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.507 [2024-07-12 16:03:21.767592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.507 qpair failed and we were unable to recover it. 
00:26:24.507 [2024-07-12 16:03:21.777466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.507 [2024-07-12 16:03:21.777564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.507 [2024-07-12 16:03:21.777587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.507 [2024-07-12 16:03:21.777601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.508 [2024-07-12 16:03:21.777614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.508 [2024-07-12 16:03:21.777641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.508 qpair failed and we were unable to recover it. 00:26:24.508 [2024-07-12 16:03:21.787525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.508 [2024-07-12 16:03:21.787627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.508 [2024-07-12 16:03:21.787651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.508 [2024-07-12 16:03:21.787665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.508 [2024-07-12 16:03:21.787678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.508 [2024-07-12 16:03:21.787706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.508 qpair failed and we were unable to recover it. 00:26:24.508 [2024-07-12 16:03:21.797629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.508 [2024-07-12 16:03:21.797763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.508 [2024-07-12 16:03:21.797809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.508 [2024-07-12 16:03:21.797826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.508 [2024-07-12 16:03:21.797839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.508 [2024-07-12 16:03:21.797878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.508 qpair failed and we were unable to recover it. 
00:26:24.767 [2024-07-12 16:03:21.807591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.767 [2024-07-12 16:03:21.807696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.767 [2024-07-12 16:03:21.807747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.767 [2024-07-12 16:03:21.807765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.767 [2024-07-12 16:03:21.807778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.767 [2024-07-12 16:03:21.807819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.767 qpair failed and we were unable to recover it. 00:26:24.767 [2024-07-12 16:03:21.817630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.767 [2024-07-12 16:03:21.817754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.767 [2024-07-12 16:03:21.817782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.767 [2024-07-12 16:03:21.817798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.767 [2024-07-12 16:03:21.817811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.767 [2024-07-12 16:03:21.817851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.767 qpair failed and we were unable to recover it. 00:26:24.767 [2024-07-12 16:03:21.827637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.767 [2024-07-12 16:03:21.827751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.767 [2024-07-12 16:03:21.827778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.767 [2024-07-12 16:03:21.827794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.767 [2024-07-12 16:03:21.827807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.767 [2024-07-12 16:03:21.827836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.767 qpair failed and we were unable to recover it. 
00:26:24.767 [2024-07-12 16:03:21.837631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.767 [2024-07-12 16:03:21.837744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.767 [2024-07-12 16:03:21.837769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.767 [2024-07-12 16:03:21.837784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.767 [2024-07-12 16:03:21.837797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.767 [2024-07-12 16:03:21.837832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.767 qpair failed and we were unable to recover it. 00:26:24.767 [2024-07-12 16:03:21.847651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.767 [2024-07-12 16:03:21.847759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.767 [2024-07-12 16:03:21.847783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.767 [2024-07-12 16:03:21.847798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.767 [2024-07-12 16:03:21.847811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.767 [2024-07-12 16:03:21.847841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.767 qpair failed and we were unable to recover it. 00:26:24.767 [2024-07-12 16:03:21.857687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.767 [2024-07-12 16:03:21.857797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.767 [2024-07-12 16:03:21.857822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.767 [2024-07-12 16:03:21.857838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.767 [2024-07-12 16:03:21.857850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.767 [2024-07-12 16:03:21.857877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.767 qpair failed and we were unable to recover it. 
00:26:24.767 [2024-07-12 16:03:21.867766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.767 [2024-07-12 16:03:21.867910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.767 [2024-07-12 16:03:21.867936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.767 [2024-07-12 16:03:21.867952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.767 [2024-07-12 16:03:21.867966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.767 [2024-07-12 16:03:21.867995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.767 qpair failed and we were unable to recover it. 00:26:24.767 [2024-07-12 16:03:21.877786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.767 [2024-07-12 16:03:21.877879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.767 [2024-07-12 16:03:21.877903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.767 [2024-07-12 16:03:21.877918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.767 [2024-07-12 16:03:21.877931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.767 [2024-07-12 16:03:21.877959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.767 qpair failed and we were unable to recover it. 00:26:24.767 [2024-07-12 16:03:21.887780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.767 [2024-07-12 16:03:21.887865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.767 [2024-07-12 16:03:21.887895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.767 [2024-07-12 16:03:21.887910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.767 [2024-07-12 16:03:21.887923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.767 [2024-07-12 16:03:21.887953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.767 qpair failed and we were unable to recover it. 
00:26:24.767 [2024-07-12 16:03:21.897808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.767 [2024-07-12 16:03:21.897904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.767 [2024-07-12 16:03:21.897927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.767 [2024-07-12 16:03:21.897942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.767 [2024-07-12 16:03:21.897954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.767 [2024-07-12 16:03:21.897982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.767 qpair failed and we were unable to recover it. 00:26:24.767 [2024-07-12 16:03:21.907857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.767 [2024-07-12 16:03:21.907948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.767 [2024-07-12 16:03:21.907971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.767 [2024-07-12 16:03:21.907986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.767 [2024-07-12 16:03:21.907998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.767 [2024-07-12 16:03:21.908044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.767 qpair failed and we were unable to recover it. 00:26:24.767 [2024-07-12 16:03:21.917906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.767 [2024-07-12 16:03:21.917992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.767 [2024-07-12 16:03:21.918017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.767 [2024-07-12 16:03:21.918032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.767 [2024-07-12 16:03:21.918044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.767 [2024-07-12 16:03:21.918073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.767 qpair failed and we were unable to recover it. 
00:26:24.767 [2024-07-12 16:03:21.927976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.767 [2024-07-12 16:03:21.928079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.767 [2024-07-12 16:03:21.928103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.767 [2024-07-12 16:03:21.928117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.767 [2024-07-12 16:03:21.928134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.768 [2024-07-12 16:03:21.928174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.768 qpair failed and we were unable to recover it. 00:26:24.768 [2024-07-12 16:03:21.937981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.768 [2024-07-12 16:03:21.938078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.768 [2024-07-12 16:03:21.938101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.768 [2024-07-12 16:03:21.938115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.768 [2024-07-12 16:03:21.938127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.768 [2024-07-12 16:03:21.938155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.768 qpair failed and we were unable to recover it. 00:26:24.768 [2024-07-12 16:03:21.947988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.768 [2024-07-12 16:03:21.948100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.768 [2024-07-12 16:03:21.948125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.768 [2024-07-12 16:03:21.948140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.768 [2024-07-12 16:03:21.948153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.768 [2024-07-12 16:03:21.948180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.768 qpair failed and we were unable to recover it. 
00:26:24.768 [2024-07-12 16:03:21.958094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.768 [2024-07-12 16:03:21.958177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.768 [2024-07-12 16:03:21.958200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.768 [2024-07-12 16:03:21.958214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.768 [2024-07-12 16:03:21.958226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.768 [2024-07-12 16:03:21.958254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.768 qpair failed and we were unable to recover it. 00:26:24.768 [2024-07-12 16:03:21.968071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.768 [2024-07-12 16:03:21.968150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.768 [2024-07-12 16:03:21.968175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.768 [2024-07-12 16:03:21.968190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.768 [2024-07-12 16:03:21.968203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.768 [2024-07-12 16:03:21.968231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.768 qpair failed and we were unable to recover it. 00:26:24.768 [2024-07-12 16:03:21.978029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.768 [2024-07-12 16:03:21.978149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.768 [2024-07-12 16:03:21.978174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.768 [2024-07-12 16:03:21.978189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.768 [2024-07-12 16:03:21.978201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.768 [2024-07-12 16:03:21.978228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.768 qpair failed and we were unable to recover it. 
00:26:24.768 [2024-07-12 16:03:21.988138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.768 [2024-07-12 16:03:21.988226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.768 [2024-07-12 16:03:21.988251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.768 [2024-07-12 16:03:21.988265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.768 [2024-07-12 16:03:21.988278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.768 [2024-07-12 16:03:21.988315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.768 qpair failed and we were unable to recover it. 00:26:24.768 [2024-07-12 16:03:21.998125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.768 [2024-07-12 16:03:21.998259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.768 [2024-07-12 16:03:21.998285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.768 [2024-07-12 16:03:21.998300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.768 [2024-07-12 16:03:21.998312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.768 [2024-07-12 16:03:21.998340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.768 qpair failed and we were unable to recover it. 00:26:24.768 [2024-07-12 16:03:22.008161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.768 [2024-07-12 16:03:22.008292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.768 [2024-07-12 16:03:22.008318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.768 [2024-07-12 16:03:22.008333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.768 [2024-07-12 16:03:22.008346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.768 [2024-07-12 16:03:22.008373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.768 qpair failed and we were unable to recover it. 
00:26:24.768 [2024-07-12 16:03:22.018199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.768 [2024-07-12 16:03:22.018298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.768 [2024-07-12 16:03:22.018322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.768 [2024-07-12 16:03:22.018336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.768 [2024-07-12 16:03:22.018354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.768 [2024-07-12 16:03:22.018383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.768 qpair failed and we were unable to recover it. 00:26:24.768 [2024-07-12 16:03:22.028207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.768 [2024-07-12 16:03:22.028298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.768 [2024-07-12 16:03:22.028321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.768 [2024-07-12 16:03:22.028335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.768 [2024-07-12 16:03:22.028348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.768 [2024-07-12 16:03:22.028375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.768 qpair failed and we were unable to recover it. 00:26:24.768 [2024-07-12 16:03:22.038247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.768 [2024-07-12 16:03:22.038358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.768 [2024-07-12 16:03:22.038383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.768 [2024-07-12 16:03:22.038398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.768 [2024-07-12 16:03:22.038410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.768 [2024-07-12 16:03:22.038437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.768 qpair failed and we were unable to recover it. 
00:26:24.768 [2024-07-12 16:03:22.048274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.768 [2024-07-12 16:03:22.048361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.768 [2024-07-12 16:03:22.048384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.768 [2024-07-12 16:03:22.048399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.768 [2024-07-12 16:03:22.048411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.768 [2024-07-12 16:03:22.048438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.768 qpair failed and we were unable to recover it. 00:26:24.768 [2024-07-12 16:03:22.058327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.768 [2024-07-12 16:03:22.058440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.768 [2024-07-12 16:03:22.058467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.768 [2024-07-12 16:03:22.058485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.768 [2024-07-12 16:03:22.058508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:24.768 [2024-07-12 16:03:22.058552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.768 qpair failed and we were unable to recover it. 00:26:25.027 [2024-07-12 16:03:22.068403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.027 [2024-07-12 16:03:22.068497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.027 [2024-07-12 16:03:22.068524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.027 [2024-07-12 16:03:22.068540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.027 [2024-07-12 16:03:22.068552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.027 [2024-07-12 16:03:22.068582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.027 qpair failed and we were unable to recover it. 
00:26:25.027 [2024-07-12 16:03:22.078403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.027 [2024-07-12 16:03:22.078498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.027 [2024-07-12 16:03:22.078522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.027 [2024-07-12 16:03:22.078537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.027 [2024-07-12 16:03:22.078549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.027 [2024-07-12 16:03:22.078586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.027 qpair failed and we were unable to recover it. 00:26:25.027 [2024-07-12 16:03:22.088387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.027 [2024-07-12 16:03:22.088474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.027 [2024-07-12 16:03:22.088499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.027 [2024-07-12 16:03:22.088513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.027 [2024-07-12 16:03:22.088525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.027 [2024-07-12 16:03:22.088554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.027 qpair failed and we were unable to recover it. 00:26:25.027 [2024-07-12 16:03:22.098395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.027 [2024-07-12 16:03:22.098486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.027 [2024-07-12 16:03:22.098510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.027 [2024-07-12 16:03:22.098524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.098536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.028 [2024-07-12 16:03:22.098565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.028 qpair failed and we were unable to recover it. 
00:26:25.028 [2024-07-12 16:03:22.108475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.028 [2024-07-12 16:03:22.108608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.028 [2024-07-12 16:03:22.108631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.028 [2024-07-12 16:03:22.108647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.108665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.028 [2024-07-12 16:03:22.108694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.028 qpair failed and we were unable to recover it. 00:26:25.028 [2024-07-12 16:03:22.118503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.028 [2024-07-12 16:03:22.118639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.028 [2024-07-12 16:03:22.118663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.028 [2024-07-12 16:03:22.118678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.118691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.028 [2024-07-12 16:03:22.118733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.028 qpair failed and we were unable to recover it. 00:26:25.028 [2024-07-12 16:03:22.128478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.028 [2024-07-12 16:03:22.128568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.028 [2024-07-12 16:03:22.128592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.028 [2024-07-12 16:03:22.128606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.128619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.028 [2024-07-12 16:03:22.128647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.028 qpair failed and we were unable to recover it. 
00:26:25.028 [2024-07-12 16:03:22.138575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.028 [2024-07-12 16:03:22.138662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.028 [2024-07-12 16:03:22.138686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.028 [2024-07-12 16:03:22.138700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.138712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.028 [2024-07-12 16:03:22.138763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.028 qpair failed and we were unable to recover it. 00:26:25.028 [2024-07-12 16:03:22.148565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.028 [2024-07-12 16:03:22.148694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.028 [2024-07-12 16:03:22.148732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.028 [2024-07-12 16:03:22.148756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.148769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.028 [2024-07-12 16:03:22.148799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.028 qpair failed and we were unable to recover it. 00:26:25.028 [2024-07-12 16:03:22.158592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.028 [2024-07-12 16:03:22.158697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.028 [2024-07-12 16:03:22.158735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.028 [2024-07-12 16:03:22.158758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.158771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.028 [2024-07-12 16:03:22.158801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.028 qpair failed and we were unable to recover it. 
00:26:25.028 [2024-07-12 16:03:22.168606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.028 [2024-07-12 16:03:22.168689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.028 [2024-07-12 16:03:22.168715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.028 [2024-07-12 16:03:22.168753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.168768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.028 [2024-07-12 16:03:22.168798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.028 qpair failed and we were unable to recover it. 00:26:25.028 [2024-07-12 16:03:22.178662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.028 [2024-07-12 16:03:22.178820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.028 [2024-07-12 16:03:22.178846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.028 [2024-07-12 16:03:22.178862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.178874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.028 [2024-07-12 16:03:22.178903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.028 qpair failed and we were unable to recover it. 00:26:25.028 [2024-07-12 16:03:22.188779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.028 [2024-07-12 16:03:22.188872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.028 [2024-07-12 16:03:22.188897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.028 [2024-07-12 16:03:22.188912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.188925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.028 [2024-07-12 16:03:22.188954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.028 qpair failed and we were unable to recover it. 
00:26:25.028 [2024-07-12 16:03:22.198757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.028 [2024-07-12 16:03:22.198870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.028 [2024-07-12 16:03:22.198895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.028 [2024-07-12 16:03:22.198916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.198929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.028 [2024-07-12 16:03:22.198958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.028 qpair failed and we were unable to recover it. 00:26:25.028 [2024-07-12 16:03:22.208755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.028 [2024-07-12 16:03:22.208853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.028 [2024-07-12 16:03:22.208876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.028 [2024-07-12 16:03:22.208890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.208902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.028 [2024-07-12 16:03:22.208931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.028 qpair failed and we were unable to recover it. 00:26:25.028 [2024-07-12 16:03:22.218793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.028 [2024-07-12 16:03:22.218884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.028 [2024-07-12 16:03:22.218908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.028 [2024-07-12 16:03:22.218923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.218936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.028 [2024-07-12 16:03:22.218975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.028 qpair failed and we were unable to recover it. 
00:26:25.028 [2024-07-12 16:03:22.228812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.028 [2024-07-12 16:03:22.228932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.028 [2024-07-12 16:03:22.228955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.028 [2024-07-12 16:03:22.228970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.228982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.028 [2024-07-12 16:03:22.229010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.028 qpair failed and we were unable to recover it. 00:26:25.028 [2024-07-12 16:03:22.238825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.028 [2024-07-12 16:03:22.238915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.028 [2024-07-12 16:03:22.238938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.028 [2024-07-12 16:03:22.238953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.028 [2024-07-12 16:03:22.238966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.029 [2024-07-12 16:03:22.238994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.029 qpair failed and we were unable to recover it. 00:26:25.029 [2024-07-12 16:03:22.248842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.029 [2024-07-12 16:03:22.248982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.029 [2024-07-12 16:03:22.249007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.029 [2024-07-12 16:03:22.249022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.029 [2024-07-12 16:03:22.249051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.029 [2024-07-12 16:03:22.249079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.029 qpair failed and we were unable to recover it. 
00:26:25.029 [2024-07-12 16:03:22.258895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.029 [2024-07-12 16:03:22.259032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.029 [2024-07-12 16:03:22.259071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.029 [2024-07-12 16:03:22.259086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.029 [2024-07-12 16:03:22.259099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.029 [2024-07-12 16:03:22.259127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.029 qpair failed and we were unable to recover it. 00:26:25.029 [2024-07-12 16:03:22.268928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.029 [2024-07-12 16:03:22.269015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.029 [2024-07-12 16:03:22.269053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.029 [2024-07-12 16:03:22.269068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.029 [2024-07-12 16:03:22.269080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.029 [2024-07-12 16:03:22.269109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.029 qpair failed and we were unable to recover it. 00:26:25.029 [2024-07-12 16:03:22.278947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.029 [2024-07-12 16:03:22.279053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.029 [2024-07-12 16:03:22.279076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.029 [2024-07-12 16:03:22.279090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.029 [2024-07-12 16:03:22.279104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.029 [2024-07-12 16:03:22.279131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.029 qpair failed and we were unable to recover it. 
00:26:25.029 [2024-07-12 16:03:22.288966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.029 [2024-07-12 16:03:22.289100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.029 [2024-07-12 16:03:22.289124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.029 [2024-07-12 16:03:22.289144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.029 [2024-07-12 16:03:22.289157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.029 [2024-07-12 16:03:22.289185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.029 qpair failed and we were unable to recover it. 00:26:25.029 [2024-07-12 16:03:22.299016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.029 [2024-07-12 16:03:22.299148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.029 [2024-07-12 16:03:22.299172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.029 [2024-07-12 16:03:22.299187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.029 [2024-07-12 16:03:22.299200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.029 [2024-07-12 16:03:22.299228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.029 qpair failed and we were unable to recover it. 00:26:25.029 [2024-07-12 16:03:22.309066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.029 [2024-07-12 16:03:22.309177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.029 [2024-07-12 16:03:22.309200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.029 [2024-07-12 16:03:22.309214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.029 [2024-07-12 16:03:22.309227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.029 [2024-07-12 16:03:22.309255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.029 qpair failed and we were unable to recover it. 
00:26:25.029 [2024-07-12 16:03:22.319080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.029 [2024-07-12 16:03:22.319171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.029 [2024-07-12 16:03:22.319199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.029 [2024-07-12 16:03:22.319215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.029 [2024-07-12 16:03:22.319228] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.029 [2024-07-12 16:03:22.319264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.029 qpair failed and we were unable to recover it. 00:26:25.289 [2024-07-12 16:03:22.329079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.289 [2024-07-12 16:03:22.329176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.289 [2024-07-12 16:03:22.329202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.289 [2024-07-12 16:03:22.329217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.289 [2024-07-12 16:03:22.329230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.289 [2024-07-12 16:03:22.329259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.289 qpair failed and we were unable to recover it. 00:26:25.289 [2024-07-12 16:03:22.339110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.289 [2024-07-12 16:03:22.339206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.289 [2024-07-12 16:03:22.339231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.289 [2024-07-12 16:03:22.339245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.289 [2024-07-12 16:03:22.339257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.289 [2024-07-12 16:03:22.339286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.289 qpair failed and we were unable to recover it. 
00:26:25.289 [2024-07-12 16:03:22.349187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.289 [2024-07-12 16:03:22.349274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.289 [2024-07-12 16:03:22.349299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.289 [2024-07-12 16:03:22.349313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.289 [2024-07-12 16:03:22.349326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.289 [2024-07-12 16:03:22.349354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.289 qpair failed and we were unable to recover it. 00:26:25.289 [2024-07-12 16:03:22.359221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.289 [2024-07-12 16:03:22.359307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.289 [2024-07-12 16:03:22.359331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.289 [2024-07-12 16:03:22.359346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.289 [2024-07-12 16:03:22.359359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.289 [2024-07-12 16:03:22.359387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.289 qpair failed and we were unable to recover it. 00:26:25.289 [2024-07-12 16:03:22.369210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.289 [2024-07-12 16:03:22.369298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.289 [2024-07-12 16:03:22.369322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.289 [2024-07-12 16:03:22.369336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.289 [2024-07-12 16:03:22.369348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.289 [2024-07-12 16:03:22.369377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.289 qpair failed and we were unable to recover it. 
00:26:25.289 [2024-07-12 16:03:22.379239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.289 [2024-07-12 16:03:22.379333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.289 [2024-07-12 16:03:22.379357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.289 [2024-07-12 16:03:22.379379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.289 [2024-07-12 16:03:22.379393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.289 [2024-07-12 16:03:22.379421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.289 qpair failed and we were unable to recover it. 00:26:25.289 [2024-07-12 16:03:22.389315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.289 [2024-07-12 16:03:22.389430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.289 [2024-07-12 16:03:22.389455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.289 [2024-07-12 16:03:22.389469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.289 [2024-07-12 16:03:22.389483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.289 [2024-07-12 16:03:22.389512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.289 qpair failed and we were unable to recover it. 00:26:25.289 [2024-07-12 16:03:22.399297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.289 [2024-07-12 16:03:22.399386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.289 [2024-07-12 16:03:22.399410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.289 [2024-07-12 16:03:22.399425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.289 [2024-07-12 16:03:22.399437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.289 [2024-07-12 16:03:22.399465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.289 qpair failed and we were unable to recover it. 
00:26:25.289 [2024-07-12 16:03:22.409378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.289 [2024-07-12 16:03:22.409470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.289 [2024-07-12 16:03:22.409494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.289 [2024-07-12 16:03:22.409509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.289 [2024-07-12 16:03:22.409521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.289 [2024-07-12 16:03:22.409549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.289 qpair failed and we were unable to recover it. 00:26:25.289 [2024-07-12 16:03:22.419362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.289 [2024-07-12 16:03:22.419453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.289 [2024-07-12 16:03:22.419478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.289 [2024-07-12 16:03:22.419507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.289 [2024-07-12 16:03:22.419520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.289 [2024-07-12 16:03:22.419549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.289 qpair failed and we were unable to recover it. 00:26:25.289 [2024-07-12 16:03:22.429462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.289 [2024-07-12 16:03:22.429561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.289 [2024-07-12 16:03:22.429585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.289 [2024-07-12 16:03:22.429599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.289 [2024-07-12 16:03:22.429612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.289 [2024-07-12 16:03:22.429640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.289 qpair failed and we were unable to recover it. 
00:26:25.289 [2024-07-12 16:03:22.439445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.289 [2024-07-12 16:03:22.439536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.439560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.439574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.439587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.290 [2024-07-12 16:03:22.439615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.290 qpair failed and we were unable to recover it. 00:26:25.290 [2024-07-12 16:03:22.449473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.290 [2024-07-12 16:03:22.449559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.449583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.449597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.449609] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.290 [2024-07-12 16:03:22.449637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.290 qpair failed and we were unable to recover it. 00:26:25.290 [2024-07-12 16:03:22.459553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.290 [2024-07-12 16:03:22.459677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.459702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.459717] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.459729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.290 [2024-07-12 16:03:22.459782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.290 qpair failed and we were unable to recover it. 
00:26:25.290 [2024-07-12 16:03:22.469477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.290 [2024-07-12 16:03:22.469572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.469600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.469616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.469628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.290 [2024-07-12 16:03:22.469656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.290 qpair failed and we were unable to recover it. 00:26:25.290 [2024-07-12 16:03:22.479531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.290 [2024-07-12 16:03:22.479622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.479646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.479661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.479673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.290 [2024-07-12 16:03:22.479701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.290 qpair failed and we were unable to recover it. 00:26:25.290 [2024-07-12 16:03:22.489596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.290 [2024-07-12 16:03:22.489685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.489709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.489752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.489768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.290 [2024-07-12 16:03:22.489798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.290 qpair failed and we were unable to recover it. 
00:26:25.290 [2024-07-12 16:03:22.499611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.290 [2024-07-12 16:03:22.499767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.499792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.499807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.499820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.290 [2024-07-12 16:03:22.499848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.290 qpair failed and we were unable to recover it. 00:26:25.290 [2024-07-12 16:03:22.509548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.290 [2024-07-12 16:03:22.509633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.509656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.509671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.509684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.290 [2024-07-12 16:03:22.509716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.290 qpair failed and we were unable to recover it. 00:26:25.290 [2024-07-12 16:03:22.519572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.290 [2024-07-12 16:03:22.519667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.519693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.519708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.519744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.290 [2024-07-12 16:03:22.519776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.290 qpair failed and we were unable to recover it. 
00:26:25.290 [2024-07-12 16:03:22.529603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.290 [2024-07-12 16:03:22.529694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.529733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.529757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.529771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.290 [2024-07-12 16:03:22.529800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.290 qpair failed and we were unable to recover it. 00:26:25.290 [2024-07-12 16:03:22.539663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.290 [2024-07-12 16:03:22.539776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.539800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.539815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.539828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.290 [2024-07-12 16:03:22.539856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.290 qpair failed and we were unable to recover it. 00:26:25.290 [2024-07-12 16:03:22.549775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.290 [2024-07-12 16:03:22.549874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.549899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.549914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.549927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.290 [2024-07-12 16:03:22.549956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.290 qpair failed and we were unable to recover it. 
00:26:25.290 [2024-07-12 16:03:22.559755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.290 [2024-07-12 16:03:22.559850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.559880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.559895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.559908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.290 [2024-07-12 16:03:22.559937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.290 qpair failed and we were unable to recover it. 00:26:25.290 [2024-07-12 16:03:22.569808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.290 [2024-07-12 16:03:22.569921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.569948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.569963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.569976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.290 [2024-07-12 16:03:22.570004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.290 qpair failed and we were unable to recover it. 00:26:25.290 [2024-07-12 16:03:22.579882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.290 [2024-07-12 16:03:22.580009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.290 [2024-07-12 16:03:22.580051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.290 [2024-07-12 16:03:22.580067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.290 [2024-07-12 16:03:22.580081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.291 [2024-07-12 16:03:22.580110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.291 qpair failed and we were unable to recover it. 
00:26:25.550 [2024-07-12 16:03:22.589845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.550 [2024-07-12 16:03:22.589992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.550 [2024-07-12 16:03:22.590020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.550 [2024-07-12 16:03:22.590036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.550 [2024-07-12 16:03:22.590064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.550 [2024-07-12 16:03:22.590095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.550 qpair failed and we were unable to recover it. 00:26:25.550 [2024-07-12 16:03:22.599822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.550 [2024-07-12 16:03:22.599921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.550 [2024-07-12 16:03:22.599946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.550 [2024-07-12 16:03:22.599962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.550 [2024-07-12 16:03:22.599975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.550 [2024-07-12 16:03:22.600009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.550 qpair failed and we were unable to recover it. 00:26:25.550 [2024-07-12 16:03:22.609872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.550 [2024-07-12 16:03:22.609961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.550 [2024-07-12 16:03:22.609985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.550 [2024-07-12 16:03:22.610001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.550 [2024-07-12 16:03:22.610014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.550 [2024-07-12 16:03:22.610044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.550 qpair failed and we were unable to recover it. 
00:26:25.550 [2024-07-12 16:03:22.619909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.550 [2024-07-12 16:03:22.620007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.550 [2024-07-12 16:03:22.620032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.550 [2024-07-12 16:03:22.620062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.550 [2024-07-12 16:03:22.620075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.550 [2024-07-12 16:03:22.620104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.550 qpair failed and we were unable to recover it. 00:26:25.550 [2024-07-12 16:03:22.630060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.550 [2024-07-12 16:03:22.630146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.550 [2024-07-12 16:03:22.630169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.550 [2024-07-12 16:03:22.630183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.550 [2024-07-12 16:03:22.630196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.550 [2024-07-12 16:03:22.630223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.550 qpair failed and we were unable to recover it. 00:26:25.550 [2024-07-12 16:03:22.639946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.550 [2024-07-12 16:03:22.640032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.550 [2024-07-12 16:03:22.640071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.550 [2024-07-12 16:03:22.640086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.550 [2024-07-12 16:03:22.640099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.550 [2024-07-12 16:03:22.640127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.550 qpair failed and we were unable to recover it. 
00:26:25.550 [2024-07-12 16:03:22.650005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.550 [2024-07-12 16:03:22.650124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.550 [2024-07-12 16:03:22.650155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.550 [2024-07-12 16:03:22.650170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.550 [2024-07-12 16:03:22.650183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.550 [2024-07-12 16:03:22.650211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.550 qpair failed and we were unable to recover it. 00:26:25.550 [2024-07-12 16:03:22.660079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.550 [2024-07-12 16:03:22.660170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.550 [2024-07-12 16:03:22.660194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.550 [2024-07-12 16:03:22.660209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.550 [2024-07-12 16:03:22.660221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.550 [2024-07-12 16:03:22.660250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.550 qpair failed and we were unable to recover it. 00:26:25.550 [2024-07-12 16:03:22.670087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.550 [2024-07-12 16:03:22.670173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.550 [2024-07-12 16:03:22.670197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.550 [2024-07-12 16:03:22.670212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.550 [2024-07-12 16:03:22.670224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.550 [2024-07-12 16:03:22.670252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.550 qpair failed and we were unable to recover it. 
00:26:25.550 [2024-07-12 16:03:22.680140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.550 [2024-07-12 16:03:22.680244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.550 [2024-07-12 16:03:22.680268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.550 [2024-07-12 16:03:22.680282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.550 [2024-07-12 16:03:22.680295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.550 [2024-07-12 16:03:22.680323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.550 qpair failed and we were unable to recover it. 00:26:25.550 [2024-07-12 16:03:22.690099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.550 [2024-07-12 16:03:22.690185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.690209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.690223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.690235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.551 [2024-07-12 16:03:22.690268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.551 qpair failed and we were unable to recover it. 00:26:25.551 [2024-07-12 16:03:22.700154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.551 [2024-07-12 16:03:22.700247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.700271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.700286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.700298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.551 [2024-07-12 16:03:22.700326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.551 qpair failed and we were unable to recover it. 
00:26:25.551 [2024-07-12 16:03:22.710199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.551 [2024-07-12 16:03:22.710285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.710309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.710324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.710336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.551 [2024-07-12 16:03:22.710363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.551 qpair failed and we were unable to recover it. 00:26:25.551 [2024-07-12 16:03:22.720239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.551 [2024-07-12 16:03:22.720342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.720366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.720380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.720393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.551 [2024-07-12 16:03:22.720420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.551 qpair failed and we were unable to recover it. 00:26:25.551 [2024-07-12 16:03:22.730206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.551 [2024-07-12 16:03:22.730293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.730318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.730333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.730346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.551 [2024-07-12 16:03:22.730373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.551 qpair failed and we were unable to recover it. 
00:26:25.551 [2024-07-12 16:03:22.740237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.551 [2024-07-12 16:03:22.740327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.740356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.740371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.740383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.551 [2024-07-12 16:03:22.740411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.551 qpair failed and we were unable to recover it. 00:26:25.551 [2024-07-12 16:03:22.750309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.551 [2024-07-12 16:03:22.750427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.750450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.750464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.750477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.551 [2024-07-12 16:03:22.750505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.551 qpair failed and we were unable to recover it. 00:26:25.551 [2024-07-12 16:03:22.760285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.551 [2024-07-12 16:03:22.760366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.760390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.760404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.760416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.551 [2024-07-12 16:03:22.760443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.551 qpair failed and we were unable to recover it. 
00:26:25.551 [2024-07-12 16:03:22.770333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.551 [2024-07-12 16:03:22.770417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.770441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.770455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.770468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.551 [2024-07-12 16:03:22.770495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.551 qpair failed and we were unable to recover it. 00:26:25.551 [2024-07-12 16:03:22.780409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.551 [2024-07-12 16:03:22.780502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.780525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.780540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.780557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.551 [2024-07-12 16:03:22.780586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.551 qpair failed and we were unable to recover it. 00:26:25.551 [2024-07-12 16:03:22.790366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.551 [2024-07-12 16:03:22.790464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.790488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.790503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.790516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.551 [2024-07-12 16:03:22.790543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.551 qpair failed and we were unable to recover it. 
00:26:25.551 [2024-07-12 16:03:22.800472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.551 [2024-07-12 16:03:22.800597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.800622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.800637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.800649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.551 [2024-07-12 16:03:22.800677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.551 qpair failed and we were unable to recover it. 00:26:25.551 [2024-07-12 16:03:22.810468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.551 [2024-07-12 16:03:22.810584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.810609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.810624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.810636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.551 [2024-07-12 16:03:22.810664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.551 qpair failed and we were unable to recover it. 00:26:25.551 [2024-07-12 16:03:22.820447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.551 [2024-07-12 16:03:22.820538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.820563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.820577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.820589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.551 [2024-07-12 16:03:22.820618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.551 qpair failed and we were unable to recover it. 
00:26:25.551 [2024-07-12 16:03:22.830468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.551 [2024-07-12 16:03:22.830558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.551 [2024-07-12 16:03:22.830584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.551 [2024-07-12 16:03:22.830600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.551 [2024-07-12 16:03:22.830612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.552 [2024-07-12 16:03:22.830640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.552 qpair failed and we were unable to recover it. 00:26:25.552 [2024-07-12 16:03:22.840541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.552 [2024-07-12 16:03:22.840637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.552 [2024-07-12 16:03:22.840664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.552 [2024-07-12 16:03:22.840679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.552 [2024-07-12 16:03:22.840693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.552 [2024-07-12 16:03:22.840722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.552 qpair failed and we were unable to recover it. 00:26:25.810 [2024-07-12 16:03:22.850634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.810 [2024-07-12 16:03:22.850746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.810 [2024-07-12 16:03:22.850774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.810 [2024-07-12 16:03:22.850790] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.810 [2024-07-12 16:03:22.850803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.810 [2024-07-12 16:03:22.850833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.810 qpair failed and we were unable to recover it. 
00:26:25.810 [2024-07-12 16:03:22.860586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.810 [2024-07-12 16:03:22.860678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.810 [2024-07-12 16:03:22.860702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.810 [2024-07-12 16:03:22.860731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.810 [2024-07-12 16:03:22.860757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.810 [2024-07-12 16:03:22.860788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.810 qpair failed and we were unable to recover it. 00:26:25.810 [2024-07-12 16:03:22.870672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.810 [2024-07-12 16:03:22.870780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.810 [2024-07-12 16:03:22.870806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.810 [2024-07-12 16:03:22.870822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.810 [2024-07-12 16:03:22.870840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.810 [2024-07-12 16:03:22.870869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.810 qpair failed and we were unable to recover it. 00:26:25.810 [2024-07-12 16:03:22.880632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.810 [2024-07-12 16:03:22.880743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.810 [2024-07-12 16:03:22.880768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.810 [2024-07-12 16:03:22.880783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.810 [2024-07-12 16:03:22.880797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.810 [2024-07-12 16:03:22.880826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 
00:26:25.811 [2024-07-12 16:03:22.890665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:22.890781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.811 [2024-07-12 16:03:22.890806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.811 [2024-07-12 16:03:22.890821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.811 [2024-07-12 16:03:22.890834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.811 [2024-07-12 16:03:22.890863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 00:26:25.811 [2024-07-12 16:03:22.900793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:22.900882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.811 [2024-07-12 16:03:22.900906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.811 [2024-07-12 16:03:22.900921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.811 [2024-07-12 16:03:22.900934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.811 [2024-07-12 16:03:22.900963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 00:26:25.811 [2024-07-12 16:03:22.910763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:22.910851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.811 [2024-07-12 16:03:22.910875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.811 [2024-07-12 16:03:22.910889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.811 [2024-07-12 16:03:22.910903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.811 [2024-07-12 16:03:22.910941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 
00:26:25.811 [2024-07-12 16:03:22.920791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:22.920889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.811 [2024-07-12 16:03:22.920913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.811 [2024-07-12 16:03:22.920928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.811 [2024-07-12 16:03:22.920941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.811 [2024-07-12 16:03:22.920970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 00:26:25.811 [2024-07-12 16:03:22.930811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:22.930904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.811 [2024-07-12 16:03:22.930929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.811 [2024-07-12 16:03:22.930944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.811 [2024-07-12 16:03:22.930957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.811 [2024-07-12 16:03:22.930986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 00:26:25.811 [2024-07-12 16:03:22.940857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:22.940979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.811 [2024-07-12 16:03:22.941003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.811 [2024-07-12 16:03:22.941033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.811 [2024-07-12 16:03:22.941047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.811 [2024-07-12 16:03:22.941074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 
00:26:25.811 [2024-07-12 16:03:22.950881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:22.950971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.811 [2024-07-12 16:03:22.950996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.811 [2024-07-12 16:03:22.951011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.811 [2024-07-12 16:03:22.951037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.811 [2024-07-12 16:03:22.951066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 00:26:25.811 [2024-07-12 16:03:22.960865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:22.960953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.811 [2024-07-12 16:03:22.960978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.811 [2024-07-12 16:03:22.960998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.811 [2024-07-12 16:03:22.961011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.811 [2024-07-12 16:03:22.961040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 00:26:25.811 [2024-07-12 16:03:22.970954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:22.971043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.811 [2024-07-12 16:03:22.971080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.811 [2024-07-12 16:03:22.971095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.811 [2024-07-12 16:03:22.971108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.811 [2024-07-12 16:03:22.971137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 
00:26:25.811 [2024-07-12 16:03:22.980978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:22.981128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.811 [2024-07-12 16:03:22.981154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.811 [2024-07-12 16:03:22.981178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.811 [2024-07-12 16:03:22.981196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.811 [2024-07-12 16:03:22.981226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 00:26:25.811 [2024-07-12 16:03:22.990953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:22.991064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.811 [2024-07-12 16:03:22.991087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.811 [2024-07-12 16:03:22.991102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.811 [2024-07-12 16:03:22.991115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.811 [2024-07-12 16:03:22.991142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 00:26:25.811 [2024-07-12 16:03:23.001003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:23.001102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.811 [2024-07-12 16:03:23.001125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.811 [2024-07-12 16:03:23.001140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.811 [2024-07-12 16:03:23.001152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.811 [2024-07-12 16:03:23.001180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 
00:26:25.811 [2024-07-12 16:03:23.011080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:23.011180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.811 [2024-07-12 16:03:23.011204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.811 [2024-07-12 16:03:23.011219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.811 [2024-07-12 16:03:23.011231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.811 [2024-07-12 16:03:23.011259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 00:26:25.811 [2024-07-12 16:03:23.021076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:23.021165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.811 [2024-07-12 16:03:23.021189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.811 [2024-07-12 16:03:23.021204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.811 [2024-07-12 16:03:23.021216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.811 [2024-07-12 16:03:23.021244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.811 qpair failed and we were unable to recover it. 00:26:25.811 [2024-07-12 16:03:23.031158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.811 [2024-07-12 16:03:23.031276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.812 [2024-07-12 16:03:23.031300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.812 [2024-07-12 16:03:23.031315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.812 [2024-07-12 16:03:23.031328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.812 [2024-07-12 16:03:23.031355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.812 qpair failed and we were unable to recover it. 
00:26:25.812 [2024-07-12 16:03:23.041142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.812 [2024-07-12 16:03:23.041226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.812 [2024-07-12 16:03:23.041249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.812 [2024-07-12 16:03:23.041263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.812 [2024-07-12 16:03:23.041276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.812 [2024-07-12 16:03:23.041304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.812 qpair failed and we were unable to recover it. 00:26:25.812 [2024-07-12 16:03:23.051178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.812 [2024-07-12 16:03:23.051263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.812 [2024-07-12 16:03:23.051286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.812 [2024-07-12 16:03:23.051305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.812 [2024-07-12 16:03:23.051318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.812 [2024-07-12 16:03:23.051347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.812 qpair failed and we were unable to recover it. 00:26:25.812 [2024-07-12 16:03:23.061246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.812 [2024-07-12 16:03:23.061335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.812 [2024-07-12 16:03:23.061358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.812 [2024-07-12 16:03:23.061373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.812 [2024-07-12 16:03:23.061386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.812 [2024-07-12 16:03:23.061414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.812 qpair failed and we were unable to recover it. 
00:26:25.812 [2024-07-12 16:03:23.071232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.812 [2024-07-12 16:03:23.071321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.812 [2024-07-12 16:03:23.071344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.812 [2024-07-12 16:03:23.071358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.812 [2024-07-12 16:03:23.071371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.812 [2024-07-12 16:03:23.071400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.812 qpair failed and we were unable to recover it. 00:26:25.812 [2024-07-12 16:03:23.081302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.812 [2024-07-12 16:03:23.081439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.812 [2024-07-12 16:03:23.081463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.812 [2024-07-12 16:03:23.081477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.812 [2024-07-12 16:03:23.081490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.812 [2024-07-12 16:03:23.081518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.812 qpair failed and we were unable to recover it. 00:26:25.812 [2024-07-12 16:03:23.091280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.812 [2024-07-12 16:03:23.091372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.812 [2024-07-12 16:03:23.091396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.812 [2024-07-12 16:03:23.091410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.812 [2024-07-12 16:03:23.091423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.812 [2024-07-12 16:03:23.091450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.812 qpair failed and we were unable to recover it. 
00:26:25.812 [2024-07-12 16:03:23.101398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:25.812 [2024-07-12 16:03:23.101526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:25.812 [2024-07-12 16:03:23.101554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:25.812 [2024-07-12 16:03:23.101571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:25.812 [2024-07-12 16:03:23.101584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:25.812 [2024-07-12 16:03:23.101615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:25.812 qpair failed and we were unable to recover it. 00:26:26.070 [2024-07-12 16:03:23.111399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.070 [2024-07-12 16:03:23.111531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.070 [2024-07-12 16:03:23.111559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.070 [2024-07-12 16:03:23.111574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.070 [2024-07-12 16:03:23.111587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.070 [2024-07-12 16:03:23.111616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.070 qpair failed and we were unable to recover it. 00:26:26.070 [2024-07-12 16:03:23.121325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.070 [2024-07-12 16:03:23.121421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.070 [2024-07-12 16:03:23.121445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.070 [2024-07-12 16:03:23.121460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.070 [2024-07-12 16:03:23.121473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.070 [2024-07-12 16:03:23.121501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.070 qpair failed and we were unable to recover it. 
00:26:26.070 [2024-07-12 16:03:23.131361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.070 [2024-07-12 16:03:23.131486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.070 [2024-07-12 16:03:23.131512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.070 [2024-07-12 16:03:23.131527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.070 [2024-07-12 16:03:23.131540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.070 [2024-07-12 16:03:23.131568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.070 qpair failed and we were unable to recover it. 00:26:26.070 [2024-07-12 16:03:23.141401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.141493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.141516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.141538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.071 [2024-07-12 16:03:23.141552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.071 [2024-07-12 16:03:23.141579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.071 qpair failed and we were unable to recover it. 00:26:26.071 [2024-07-12 16:03:23.151405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.151490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.151515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.151530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.071 [2024-07-12 16:03:23.151542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.071 [2024-07-12 16:03:23.151570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.071 qpair failed and we were unable to recover it. 
00:26:26.071 [2024-07-12 16:03:23.161424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.161552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.161575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.161590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.071 [2024-07-12 16:03:23.161602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.071 [2024-07-12 16:03:23.161632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.071 qpair failed and we were unable to recover it. 00:26:26.071 [2024-07-12 16:03:23.171461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.171546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.171572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.171587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.071 [2024-07-12 16:03:23.171599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.071 [2024-07-12 16:03:23.171627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.071 qpair failed and we were unable to recover it. 00:26:26.071 [2024-07-12 16:03:23.181498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.181623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.181648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.181664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.071 [2024-07-12 16:03:23.181676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.071 [2024-07-12 16:03:23.181704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.071 qpair failed and we were unable to recover it. 
00:26:26.071 [2024-07-12 16:03:23.191515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.191602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.191627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.191642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.071 [2024-07-12 16:03:23.191654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.071 [2024-07-12 16:03:23.191682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.071 qpair failed and we were unable to recover it. 00:26:26.071 [2024-07-12 16:03:23.201542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.201640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.201665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.201680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.071 [2024-07-12 16:03:23.201693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.071 [2024-07-12 16:03:23.201736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.071 qpair failed and we were unable to recover it. 00:26:26.071 [2024-07-12 16:03:23.211555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.211638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.211661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.211676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.071 [2024-07-12 16:03:23.211688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.071 [2024-07-12 16:03:23.211716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.071 qpair failed and we were unable to recover it. 
00:26:26.071 [2024-07-12 16:03:23.221622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.221711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.221759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.221776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.071 [2024-07-12 16:03:23.221788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.071 [2024-07-12 16:03:23.221817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.071 qpair failed and we were unable to recover it. 00:26:26.071 [2024-07-12 16:03:23.231631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.231717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.231754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.231771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.071 [2024-07-12 16:03:23.231785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.071 [2024-07-12 16:03:23.231814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.071 qpair failed and we were unable to recover it. 00:26:26.071 [2024-07-12 16:03:23.241654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.241760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.241786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.241802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.071 [2024-07-12 16:03:23.241814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.071 [2024-07-12 16:03:23.241843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.071 qpair failed and we were unable to recover it. 
00:26:26.071 [2024-07-12 16:03:23.251684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.251792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.251818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.251833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.071 [2024-07-12 16:03:23.251846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.071 [2024-07-12 16:03:23.251875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.071 qpair failed and we were unable to recover it. 00:26:26.071 [2024-07-12 16:03:23.261825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.261954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.261981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.261996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.071 [2024-07-12 16:03:23.262008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.071 [2024-07-12 16:03:23.262052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.071 qpair failed and we were unable to recover it. 00:26:26.071 [2024-07-12 16:03:23.271749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.271844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.271869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.271884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.071 [2024-07-12 16:03:23.271896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.071 [2024-07-12 16:03:23.271925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.071 qpair failed and we were unable to recover it. 
00:26:26.071 [2024-07-12 16:03:23.281787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.071 [2024-07-12 16:03:23.281874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.071 [2024-07-12 16:03:23.281899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.071 [2024-07-12 16:03:23.281914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.072 [2024-07-12 16:03:23.281927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.072 [2024-07-12 16:03:23.281956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.072 qpair failed and we were unable to recover it. 00:26:26.072 [2024-07-12 16:03:23.291814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.072 [2024-07-12 16:03:23.291900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.072 [2024-07-12 16:03:23.291927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.072 [2024-07-12 16:03:23.291942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.072 [2024-07-12 16:03:23.291955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.072 [2024-07-12 16:03:23.291983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.072 qpair failed and we were unable to recover it. 00:26:26.072 [2024-07-12 16:03:23.301952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.072 [2024-07-12 16:03:23.302080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.072 [2024-07-12 16:03:23.302105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.072 [2024-07-12 16:03:23.302120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.072 [2024-07-12 16:03:23.302132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.072 [2024-07-12 16:03:23.302160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.072 qpair failed and we were unable to recover it. 
00:26:26.072 [2024-07-12 16:03:23.311859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.072 [2024-07-12 16:03:23.311953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.072 [2024-07-12 16:03:23.311978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.072 [2024-07-12 16:03:23.311992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.072 [2024-07-12 16:03:23.312005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.072 [2024-07-12 16:03:23.312048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.072 qpair failed and we were unable to recover it. 00:26:26.072 [2024-07-12 16:03:23.321988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.072 [2024-07-12 16:03:23.322090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.072 [2024-07-12 16:03:23.322120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.072 [2024-07-12 16:03:23.322136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.072 [2024-07-12 16:03:23.322148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.072 [2024-07-12 16:03:23.322175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.072 qpair failed and we were unable to recover it. 00:26:26.072 [2024-07-12 16:03:23.331924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.072 [2024-07-12 16:03:23.332034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.072 [2024-07-12 16:03:23.332057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.072 [2024-07-12 16:03:23.332072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.072 [2024-07-12 16:03:23.332084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.072 [2024-07-12 16:03:23.332112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.072 qpair failed and we were unable to recover it. 
00:26:26.072 [2024-07-12 16:03:23.342044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.072 [2024-07-12 16:03:23.342172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.072 [2024-07-12 16:03:23.342197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.072 [2024-07-12 16:03:23.342213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.072 [2024-07-12 16:03:23.342226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.072 [2024-07-12 16:03:23.342253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.072 qpair failed and we were unable to recover it. 00:26:26.072 [2024-07-12 16:03:23.351982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.072 [2024-07-12 16:03:23.352067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.072 [2024-07-12 16:03:23.352106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.072 [2024-07-12 16:03:23.352120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.072 [2024-07-12 16:03:23.352133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.072 [2024-07-12 16:03:23.352161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.072 qpair failed and we were unable to recover it. 00:26:26.072 [2024-07-12 16:03:23.361996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.072 [2024-07-12 16:03:23.362101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.072 [2024-07-12 16:03:23.362127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.072 [2024-07-12 16:03:23.362142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.072 [2024-07-12 16:03:23.362155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.072 [2024-07-12 16:03:23.362189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.072 qpair failed and we were unable to recover it. 
00:26:26.331 [2024-07-12 16:03:23.372094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.331 [2024-07-12 16:03:23.372197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.331 [2024-07-12 16:03:23.372224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.331 [2024-07-12 16:03:23.372239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.331 [2024-07-12 16:03:23.372252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.331 [2024-07-12 16:03:23.372280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.331 qpair failed and we were unable to recover it. 00:26:26.331 [2024-07-12 16:03:23.382083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.331 [2024-07-12 16:03:23.382183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.331 [2024-07-12 16:03:23.382207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.331 [2024-07-12 16:03:23.382221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.331 [2024-07-12 16:03:23.382234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.331 [2024-07-12 16:03:23.382263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.331 qpair failed and we were unable to recover it. 00:26:26.331 [2024-07-12 16:03:23.392090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.331 [2024-07-12 16:03:23.392173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.331 [2024-07-12 16:03:23.392197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.331 [2024-07-12 16:03:23.392211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.331 [2024-07-12 16:03:23.392224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.331 [2024-07-12 16:03:23.392252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.331 qpair failed and we were unable to recover it. 
00:26:26.331 [2024-07-12 16:03:23.402116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.331 [2024-07-12 16:03:23.402217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.331 [2024-07-12 16:03:23.402243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.331 [2024-07-12 16:03:23.402259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.331 [2024-07-12 16:03:23.402271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.331 [2024-07-12 16:03:23.402299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.331 qpair failed and we were unable to recover it. 00:26:26.331 [2024-07-12 16:03:23.412144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.331 [2024-07-12 16:03:23.412227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.331 [2024-07-12 16:03:23.412256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.331 [2024-07-12 16:03:23.412271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.331 [2024-07-12 16:03:23.412284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.331 [2024-07-12 16:03:23.412312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.331 qpair failed and we were unable to recover it. 00:26:26.331 [2024-07-12 16:03:23.422229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.331 [2024-07-12 16:03:23.422348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.331 [2024-07-12 16:03:23.422373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.331 [2024-07-12 16:03:23.422388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.331 [2024-07-12 16:03:23.422401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.331 [2024-07-12 16:03:23.422428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.331 qpair failed and we were unable to recover it. 
00:26:26.331 [2024-07-12 16:03:23.432212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.331 [2024-07-12 16:03:23.432296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.331 [2024-07-12 16:03:23.432320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.331 [2024-07-12 16:03:23.432334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.331 [2024-07-12 16:03:23.432347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.331 [2024-07-12 16:03:23.432375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.331 qpair failed and we were unable to recover it. 00:26:26.331 [2024-07-12 16:03:23.442238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.331 [2024-07-12 16:03:23.442332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.331 [2024-07-12 16:03:23.442356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.331 [2024-07-12 16:03:23.442370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.331 [2024-07-12 16:03:23.442382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.331 [2024-07-12 16:03:23.442410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.331 qpair failed and we were unable to recover it. 00:26:26.331 [2024-07-12 16:03:23.452263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.331 [2024-07-12 16:03:23.452347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.331 [2024-07-12 16:03:23.452371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.331 [2024-07-12 16:03:23.452386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.331 [2024-07-12 16:03:23.452399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.331 [2024-07-12 16:03:23.452431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.331 qpair failed and we were unable to recover it. 
00:26:26.331 [2024-07-12 16:03:23.462316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.331 [2024-07-12 16:03:23.462408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.331 [2024-07-12 16:03:23.462431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.331 [2024-07-12 16:03:23.462446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.331 [2024-07-12 16:03:23.462458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.331 [2024-07-12 16:03:23.462485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.331 qpair failed and we were unable to recover it. 00:26:26.331 [2024-07-12 16:03:23.472402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.331 [2024-07-12 16:03:23.472488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.331 [2024-07-12 16:03:23.472511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.331 [2024-07-12 16:03:23.472525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.331 [2024-07-12 16:03:23.472538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.331 [2024-07-12 16:03:23.472565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 00:26:26.332 [2024-07-12 16:03:23.482390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.482501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.332 [2024-07-12 16:03:23.482527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.332 [2024-07-12 16:03:23.482541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.332 [2024-07-12 16:03:23.482553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.332 [2024-07-12 16:03:23.482581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 
00:26:26.332 [2024-07-12 16:03:23.492437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.492523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.332 [2024-07-12 16:03:23.492547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.332 [2024-07-12 16:03:23.492561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.332 [2024-07-12 16:03:23.492573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.332 [2024-07-12 16:03:23.492601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 00:26:26.332 [2024-07-12 16:03:23.502419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.502524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.332 [2024-07-12 16:03:23.502554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.332 [2024-07-12 16:03:23.502570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.332 [2024-07-12 16:03:23.502583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.332 [2024-07-12 16:03:23.502611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 00:26:26.332 [2024-07-12 16:03:23.512422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.512515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.332 [2024-07-12 16:03:23.512539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.332 [2024-07-12 16:03:23.512554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.332 [2024-07-12 16:03:23.512566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.332 [2024-07-12 16:03:23.512593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 
00:26:26.332 [2024-07-12 16:03:23.522447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.522535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.332 [2024-07-12 16:03:23.522559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.332 [2024-07-12 16:03:23.522574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.332 [2024-07-12 16:03:23.522586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.332 [2024-07-12 16:03:23.522614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 00:26:26.332 [2024-07-12 16:03:23.532470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.532558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.332 [2024-07-12 16:03:23.532582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.332 [2024-07-12 16:03:23.532596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.332 [2024-07-12 16:03:23.532608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.332 [2024-07-12 16:03:23.532637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 00:26:26.332 [2024-07-12 16:03:23.542541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.542638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.332 [2024-07-12 16:03:23.542662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.332 [2024-07-12 16:03:23.542677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.332 [2024-07-12 16:03:23.542695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.332 [2024-07-12 16:03:23.542745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 
00:26:26.332 [2024-07-12 16:03:23.552543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.552632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.332 [2024-07-12 16:03:23.552655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.332 [2024-07-12 16:03:23.552669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.332 [2024-07-12 16:03:23.552682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.332 [2024-07-12 16:03:23.552710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 00:26:26.332 [2024-07-12 16:03:23.562561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.562647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.332 [2024-07-12 16:03:23.562670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.332 [2024-07-12 16:03:23.562683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.332 [2024-07-12 16:03:23.562696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.332 [2024-07-12 16:03:23.562746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 00:26:26.332 [2024-07-12 16:03:23.572586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.572682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.332 [2024-07-12 16:03:23.572706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.332 [2024-07-12 16:03:23.572720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.332 [2024-07-12 16:03:23.572732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.332 [2024-07-12 16:03:23.572785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 
00:26:26.332 [2024-07-12 16:03:23.582641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.582756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.332 [2024-07-12 16:03:23.582783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.332 [2024-07-12 16:03:23.582798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.332 [2024-07-12 16:03:23.582811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.332 [2024-07-12 16:03:23.582841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 00:26:26.332 [2024-07-12 16:03:23.592669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.592784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.332 [2024-07-12 16:03:23.592809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.332 [2024-07-12 16:03:23.592824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.332 [2024-07-12 16:03:23.592837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.332 [2024-07-12 16:03:23.592866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 00:26:26.332 [2024-07-12 16:03:23.602677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.602789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.332 [2024-07-12 16:03:23.602813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.332 [2024-07-12 16:03:23.602828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.332 [2024-07-12 16:03:23.602841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.332 [2024-07-12 16:03:23.602870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 
00:26:26.332 [2024-07-12 16:03:23.612767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.612855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.332 [2024-07-12 16:03:23.612880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.332 [2024-07-12 16:03:23.612895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.332 [2024-07-12 16:03:23.612907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.332 [2024-07-12 16:03:23.612936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.332 qpair failed and we were unable to recover it. 00:26:26.332 [2024-07-12 16:03:23.622791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.332 [2024-07-12 16:03:23.622896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.333 [2024-07-12 16:03:23.622925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.333 [2024-07-12 16:03:23.622941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.333 [2024-07-12 16:03:23.622954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.333 [2024-07-12 16:03:23.622990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.333 qpair failed and we were unable to recover it. 00:26:26.591 [2024-07-12 16:03:23.632796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.591 [2024-07-12 16:03:23.632937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.591 [2024-07-12 16:03:23.632966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.591 [2024-07-12 16:03:23.632982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.591 [2024-07-12 16:03:23.633000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.591 [2024-07-12 16:03:23.633045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.591 qpair failed and we were unable to recover it. 
00:26:26.591 [2024-07-12 16:03:23.642843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.591 [2024-07-12 16:03:23.642951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.591 [2024-07-12 16:03:23.642978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.591 [2024-07-12 16:03:23.642994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.643006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.643036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 00:26:26.592 [2024-07-12 16:03:23.652851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.592 [2024-07-12 16:03:23.652952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.592 [2024-07-12 16:03:23.652977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.592 [2024-07-12 16:03:23.652992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.653005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.653033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 00:26:26.592 [2024-07-12 16:03:23.662884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.592 [2024-07-12 16:03:23.662978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.592 [2024-07-12 16:03:23.663002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.592 [2024-07-12 16:03:23.663017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.663029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.663057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 
00:26:26.592 [2024-07-12 16:03:23.672907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.592 [2024-07-12 16:03:23.672995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.592 [2024-07-12 16:03:23.673020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.592 [2024-07-12 16:03:23.673035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.673064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.673093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 00:26:26.592 [2024-07-12 16:03:23.682925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.592 [2024-07-12 16:03:23.683022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.592 [2024-07-12 16:03:23.683046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.592 [2024-07-12 16:03:23.683061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.683074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.683103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 00:26:26.592 [2024-07-12 16:03:23.692982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.592 [2024-07-12 16:03:23.693097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.592 [2024-07-12 16:03:23.693122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.592 [2024-07-12 16:03:23.693136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.693148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.693177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 
00:26:26.592 [2024-07-12 16:03:23.703076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.592 [2024-07-12 16:03:23.703171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.592 [2024-07-12 16:03:23.703194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.592 [2024-07-12 16:03:23.703209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.703222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.703250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 00:26:26.592 [2024-07-12 16:03:23.713110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.592 [2024-07-12 16:03:23.713204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.592 [2024-07-12 16:03:23.713229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.592 [2024-07-12 16:03:23.713244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.713256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.713284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 00:26:26.592 [2024-07-12 16:03:23.723097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.592 [2024-07-12 16:03:23.723262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.592 [2024-07-12 16:03:23.723287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.592 [2024-07-12 16:03:23.723302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.723327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.723354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 
00:26:26.592 [2024-07-12 16:03:23.733083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.592 [2024-07-12 16:03:23.733173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.592 [2024-07-12 16:03:23.733196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.592 [2024-07-12 16:03:23.733210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.733222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.733249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 00:26:26.592 [2024-07-12 16:03:23.743180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.592 [2024-07-12 16:03:23.743278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.592 [2024-07-12 16:03:23.743302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.592 [2024-07-12 16:03:23.743316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.743328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.743356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 00:26:26.592 [2024-07-12 16:03:23.753152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.592 [2024-07-12 16:03:23.753290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.592 [2024-07-12 16:03:23.753316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.592 [2024-07-12 16:03:23.753331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.753343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.753372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 
00:26:26.592 [2024-07-12 16:03:23.763166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.592 [2024-07-12 16:03:23.763252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.592 [2024-07-12 16:03:23.763278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.592 [2024-07-12 16:03:23.763293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.763305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.763333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 00:26:26.592 [2024-07-12 16:03:23.773231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.592 [2024-07-12 16:03:23.773325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.592 [2024-07-12 16:03:23.773349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.592 [2024-07-12 16:03:23.773364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.773377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.773404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 00:26:26.592 [2024-07-12 16:03:23.783310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.592 [2024-07-12 16:03:23.783443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.592 [2024-07-12 16:03:23.783468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.592 [2024-07-12 16:03:23.783482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.592 [2024-07-12 16:03:23.783494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.592 [2024-07-12 16:03:23.783522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.592 qpair failed and we were unable to recover it. 
00:26:26.593 [2024-07-12 16:03:23.793266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.593 [2024-07-12 16:03:23.793359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.593 [2024-07-12 16:03:23.793383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.593 [2024-07-12 16:03:23.793397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.593 [2024-07-12 16:03:23.793409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.593 [2024-07-12 16:03:23.793436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.593 qpair failed and we were unable to recover it. 00:26:26.593 [2024-07-12 16:03:23.803312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.593 [2024-07-12 16:03:23.803403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.593 [2024-07-12 16:03:23.803427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.593 [2024-07-12 16:03:23.803441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.593 [2024-07-12 16:03:23.803453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.593 [2024-07-12 16:03:23.803481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.593 qpair failed and we were unable to recover it. 00:26:26.593 [2024-07-12 16:03:23.813362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.593 [2024-07-12 16:03:23.813450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.593 [2024-07-12 16:03:23.813473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.593 [2024-07-12 16:03:23.813493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.593 [2024-07-12 16:03:23.813505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.593 [2024-07-12 16:03:23.813533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.593 qpair failed and we were unable to recover it. 
00:26:26.593 [2024-07-12 16:03:23.823428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.593 [2024-07-12 16:03:23.823524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.593 [2024-07-12 16:03:23.823548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.593 [2024-07-12 16:03:23.823563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.593 [2024-07-12 16:03:23.823575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.593 [2024-07-12 16:03:23.823613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.593 qpair failed and we were unable to recover it. 00:26:26.593 [2024-07-12 16:03:23.833402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.593 [2024-07-12 16:03:23.833490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.593 [2024-07-12 16:03:23.833515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.593 [2024-07-12 16:03:23.833531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.593 [2024-07-12 16:03:23.833543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.593 [2024-07-12 16:03:23.833570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.593 qpair failed and we were unable to recover it. 00:26:26.593 [2024-07-12 16:03:23.843471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.593 [2024-07-12 16:03:23.843555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.593 [2024-07-12 16:03:23.843578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.593 [2024-07-12 16:03:23.843592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.593 [2024-07-12 16:03:23.843605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.593 [2024-07-12 16:03:23.843633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.593 qpair failed and we were unable to recover it. 
00:26:26.593 [2024-07-12 16:03:23.853455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.593 [2024-07-12 16:03:23.853540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.593 [2024-07-12 16:03:23.853564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.593 [2024-07-12 16:03:23.853579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.593 [2024-07-12 16:03:23.853592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.593 [2024-07-12 16:03:23.853619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.593 qpair failed and we were unable to recover it. 00:26:26.593 [2024-07-12 16:03:23.863500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.593 [2024-07-12 16:03:23.863621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.593 [2024-07-12 16:03:23.863646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.593 [2024-07-12 16:03:23.863661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.593 [2024-07-12 16:03:23.863674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.593 [2024-07-12 16:03:23.863701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.593 qpair failed and we were unable to recover it. 00:26:26.593 [2024-07-12 16:03:23.873512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.593 [2024-07-12 16:03:23.873633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.593 [2024-07-12 16:03:23.873659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.593 [2024-07-12 16:03:23.873673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.593 [2024-07-12 16:03:23.873686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.593 [2024-07-12 16:03:23.873713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.593 qpair failed and we were unable to recover it. 
00:26:26.593 [2024-07-12 16:03:23.883556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.593 [2024-07-12 16:03:23.883667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.593 [2024-07-12 16:03:23.883701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.593 [2024-07-12 16:03:23.883720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.593 [2024-07-12 16:03:23.883733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.593 [2024-07-12 16:03:23.883773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.593 qpair failed and we were unable to recover it. 00:26:26.852 [2024-07-12 16:03:23.893620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.852 [2024-07-12 16:03:23.893742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.852 [2024-07-12 16:03:23.893769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.852 [2024-07-12 16:03:23.893784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.852 [2024-07-12 16:03:23.893797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.852 [2024-07-12 16:03:23.893827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.852 qpair failed and we were unable to recover it. 00:26:26.852 [2024-07-12 16:03:23.903603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.852 [2024-07-12 16:03:23.903699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.852 [2024-07-12 16:03:23.903724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.852 [2024-07-12 16:03:23.903770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.852 [2024-07-12 16:03:23.903785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.852 [2024-07-12 16:03:23.903815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.852 qpair failed and we were unable to recover it. 
00:26:26.852 [2024-07-12 16:03:23.913669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.852 [2024-07-12 16:03:23.913813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.852 [2024-07-12 16:03:23.913839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.852 [2024-07-12 16:03:23.913855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.852 [2024-07-12 16:03:23.913867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.852 [2024-07-12 16:03:23.913896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.852 qpair failed and we were unable to recover it. 00:26:26.852 [2024-07-12 16:03:23.923601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.852 [2024-07-12 16:03:23.923701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.852 [2024-07-12 16:03:23.923725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.852 [2024-07-12 16:03:23.923762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.852 [2024-07-12 16:03:23.923777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.852 [2024-07-12 16:03:23.923806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.852 qpair failed and we were unable to recover it. 00:26:26.852 [2024-07-12 16:03:23.933637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.852 [2024-07-12 16:03:23.933725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.852 [2024-07-12 16:03:23.933773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.852 [2024-07-12 16:03:23.933789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.852 [2024-07-12 16:03:23.933802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.852 [2024-07-12 16:03:23.933832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.852 qpair failed and we were unable to recover it. 
00:26:26.852 [2024-07-12 16:03:23.943682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.852 [2024-07-12 16:03:23.943800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.852 [2024-07-12 16:03:23.943827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.852 [2024-07-12 16:03:23.943842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.852 [2024-07-12 16:03:23.943854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.852 [2024-07-12 16:03:23.943883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.852 qpair failed and we were unable to recover it. 00:26:26.852 [2024-07-12 16:03:23.953796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.852 [2024-07-12 16:03:23.953887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.852 [2024-07-12 16:03:23.953913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.852 [2024-07-12 16:03:23.953929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.852 [2024-07-12 16:03:23.953941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.852 [2024-07-12 16:03:23.953969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.852 qpair failed and we were unable to recover it. 00:26:26.852 [2024-07-12 16:03:23.963775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.852 [2024-07-12 16:03:23.963866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:23.963890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:23.963905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:23.963917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:23.963946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.853 qpair failed and we were unable to recover it. 
00:26:26.853 [2024-07-12 16:03:23.973764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.853 [2024-07-12 16:03:23.973853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:23.973878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:23.973893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:23.973906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:23.973935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.853 qpair failed and we were unable to recover it. 00:26:26.853 [2024-07-12 16:03:23.983798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.853 [2024-07-12 16:03:23.983905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:23.983932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:23.983947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:23.983959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:23.983988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.853 qpair failed and we were unable to recover it. 00:26:26.853 [2024-07-12 16:03:23.993849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.853 [2024-07-12 16:03:23.993943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:23.993974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:23.993990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:23.994002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:23.994031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.853 qpair failed and we were unable to recover it. 
00:26:26.853 [2024-07-12 16:03:24.003878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.853 [2024-07-12 16:03:24.003964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:24.003988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:24.004003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:24.004017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:24.004060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.853 qpair failed and we were unable to recover it. 00:26:26.853 [2024-07-12 16:03:24.013885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.853 [2024-07-12 16:03:24.014029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:24.014069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:24.014084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:24.014097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:24.014136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.853 qpair failed and we were unable to recover it. 00:26:26.853 [2024-07-12 16:03:24.023964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.853 [2024-07-12 16:03:24.024072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:24.024097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:24.024111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:24.024124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:24.024152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.853 qpair failed and we were unable to recover it. 
00:26:26.853 [2024-07-12 16:03:24.033961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.853 [2024-07-12 16:03:24.034071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:24.034096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:24.034110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:24.034123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:24.034151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.853 qpair failed and we were unable to recover it. 00:26:26.853 [2024-07-12 16:03:24.044053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.853 [2024-07-12 16:03:24.044139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:24.044163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:24.044177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:24.044189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:24.044216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.853 qpair failed and we were unable to recover it. 00:26:26.853 [2024-07-12 16:03:24.053987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.853 [2024-07-12 16:03:24.054088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:24.054111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:24.054125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:24.054137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:24.054165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.853 qpair failed and we were unable to recover it. 
00:26:26.853 [2024-07-12 16:03:24.064102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.853 [2024-07-12 16:03:24.064221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:24.064244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:24.064259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:24.064272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:24.064300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.853 qpair failed and we were unable to recover it. 00:26:26.853 [2024-07-12 16:03:24.074125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.853 [2024-07-12 16:03:24.074246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:24.074270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:24.074285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:24.074297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:24.074325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.853 qpair failed and we were unable to recover it. 00:26:26.853 [2024-07-12 16:03:24.084111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.853 [2024-07-12 16:03:24.084228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:24.084256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:24.084272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:24.084284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:24.084312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.853 qpair failed and we were unable to recover it. 
00:26:26.853 [2024-07-12 16:03:24.094129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.853 [2024-07-12 16:03:24.094219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:24.094242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:24.094257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:24.094270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:24.094298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.853 qpair failed and we were unable to recover it. 00:26:26.853 [2024-07-12 16:03:24.104225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.853 [2024-07-12 16:03:24.104316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.853 [2024-07-12 16:03:24.104339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.853 [2024-07-12 16:03:24.104354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.853 [2024-07-12 16:03:24.104366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.853 [2024-07-12 16:03:24.104394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.854 qpair failed and we were unable to recover it. 00:26:26.854 [2024-07-12 16:03:24.114207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.854 [2024-07-12 16:03:24.114296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.854 [2024-07-12 16:03:24.114319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.854 [2024-07-12 16:03:24.114333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.854 [2024-07-12 16:03:24.114346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.854 [2024-07-12 16:03:24.114377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.854 qpair failed and we were unable to recover it. 
00:26:26.854 [2024-07-12 16:03:24.124223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.854 [2024-07-12 16:03:24.124317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.854 [2024-07-12 16:03:24.124341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.854 [2024-07-12 16:03:24.124356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.854 [2024-07-12 16:03:24.124369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.854 [2024-07-12 16:03:24.124402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.854 qpair failed and we were unable to recover it. 00:26:26.854 [2024-07-12 16:03:24.134291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.854 [2024-07-12 16:03:24.134377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.854 [2024-07-12 16:03:24.134401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.854 [2024-07-12 16:03:24.134415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.854 [2024-07-12 16:03:24.134427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.854 [2024-07-12 16:03:24.134455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.854 qpair failed and we were unable to recover it. 00:26:26.854 [2024-07-12 16:03:24.144316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:26.854 [2024-07-12 16:03:24.144498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:26.854 [2024-07-12 16:03:24.144537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:26.854 [2024-07-12 16:03:24.144581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:26.854 [2024-07-12 16:03:24.144599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:26.854 [2024-07-12 16:03:24.144630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:26.854 qpair failed and we were unable to recover it. 
00:26:27.113 [2024-07-12 16:03:24.154288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.113 [2024-07-12 16:03:24.154380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.113 [2024-07-12 16:03:24.154407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.113 [2024-07-12 16:03:24.154423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.113 [2024-07-12 16:03:24.154435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.113 [2024-07-12 16:03:24.154464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.113 qpair failed and we were unable to recover it. 00:26:27.113 [2024-07-12 16:03:24.164336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.113 [2024-07-12 16:03:24.164421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.113 [2024-07-12 16:03:24.164445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.113 [2024-07-12 16:03:24.164461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.113 [2024-07-12 16:03:24.164474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.113 [2024-07-12 16:03:24.164502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.113 qpair failed and we were unable to recover it. 00:26:27.113 [2024-07-12 16:03:24.174411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.113 [2024-07-12 16:03:24.174499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.113 [2024-07-12 16:03:24.174528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.113 [2024-07-12 16:03:24.174543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.113 [2024-07-12 16:03:24.174555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.113 [2024-07-12 16:03:24.174584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.113 qpair failed and we were unable to recover it. 
00:26:27.113 [2024-07-12 16:03:24.184379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.113 [2024-07-12 16:03:24.184477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.113 [2024-07-12 16:03:24.184501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.113 [2024-07-12 16:03:24.184516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.113 [2024-07-12 16:03:24.184529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.113 [2024-07-12 16:03:24.184556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.113 qpair failed and we were unable to recover it. 00:26:27.113 [2024-07-12 16:03:24.194457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.113 [2024-07-12 16:03:24.194544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.113 [2024-07-12 16:03:24.194568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.113 [2024-07-12 16:03:24.194582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.113 [2024-07-12 16:03:24.194595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.113 [2024-07-12 16:03:24.194623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.113 qpair failed and we were unable to recover it. 00:26:27.113 [2024-07-12 16:03:24.204527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.113 [2024-07-12 16:03:24.204657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.113 [2024-07-12 16:03:24.204682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.113 [2024-07-12 16:03:24.204697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.113 [2024-07-12 16:03:24.204710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.113 [2024-07-12 16:03:24.204762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.113 qpair failed and we were unable to recover it. 
00:26:27.113 [2024-07-12 16:03:24.214500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.113 [2024-07-12 16:03:24.214609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.113 [2024-07-12 16:03:24.214634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.113 [2024-07-12 16:03:24.214648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.113 [2024-07-12 16:03:24.214660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.113 [2024-07-12 16:03:24.214693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.113 qpair failed and we were unable to recover it. 00:26:27.113 [2024-07-12 16:03:24.224485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.113 [2024-07-12 16:03:24.224575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.113 [2024-07-12 16:03:24.224599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.113 [2024-07-12 16:03:24.224614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.113 [2024-07-12 16:03:24.224626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.113 [2024-07-12 16:03:24.224654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.113 qpair failed and we were unable to recover it. 00:26:27.113 [2024-07-12 16:03:24.234510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.113 [2024-07-12 16:03:24.234592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.113 [2024-07-12 16:03:24.234616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.113 [2024-07-12 16:03:24.234630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.113 [2024-07-12 16:03:24.234642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.113 [2024-07-12 16:03:24.234670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.113 qpair failed and we were unable to recover it. 
00:26:27.113 [2024-07-12 16:03:24.244602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.113 [2024-07-12 16:03:24.244692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.113 [2024-07-12 16:03:24.244730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.113 [2024-07-12 16:03:24.244754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.113 [2024-07-12 16:03:24.244768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.113 [2024-07-12 16:03:24.244797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.113 qpair failed and we were unable to recover it. 00:26:27.113 [2024-07-12 16:03:24.254571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.113 [2024-07-12 16:03:24.254706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.113 [2024-07-12 16:03:24.254755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.113 [2024-07-12 16:03:24.254772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.113 [2024-07-12 16:03:24.254785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.113 [2024-07-12 16:03:24.254814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.113 qpair failed and we were unable to recover it. 00:26:27.113 [2024-07-12 16:03:24.264608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.113 [2024-07-12 16:03:24.264697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.113 [2024-07-12 16:03:24.264726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.113 [2024-07-12 16:03:24.264761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.113 [2024-07-12 16:03:24.264777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.264806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 
00:26:27.114 [2024-07-12 16:03:24.274683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.114 [2024-07-12 16:03:24.274795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.114 [2024-07-12 16:03:24.274820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.114 [2024-07-12 16:03:24.274834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.114 [2024-07-12 16:03:24.274848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.274877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 00:26:27.114 [2024-07-12 16:03:24.284656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.114 [2024-07-12 16:03:24.284784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.114 [2024-07-12 16:03:24.284809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.114 [2024-07-12 16:03:24.284824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.114 [2024-07-12 16:03:24.284837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.284866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 00:26:27.114 [2024-07-12 16:03:24.294688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.114 [2024-07-12 16:03:24.294825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.114 [2024-07-12 16:03:24.294850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.114 [2024-07-12 16:03:24.294865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.114 [2024-07-12 16:03:24.294878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.294906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 
00:26:27.114 [2024-07-12 16:03:24.304758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.114 [2024-07-12 16:03:24.304859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.114 [2024-07-12 16:03:24.304883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.114 [2024-07-12 16:03:24.304898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.114 [2024-07-12 16:03:24.304924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.304956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 00:26:27.114 [2024-07-12 16:03:24.314743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.114 [2024-07-12 16:03:24.314839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.114 [2024-07-12 16:03:24.314863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.114 [2024-07-12 16:03:24.314878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.114 [2024-07-12 16:03:24.314891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.314919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 00:26:27.114 [2024-07-12 16:03:24.324829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.114 [2024-07-12 16:03:24.324957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.114 [2024-07-12 16:03:24.324981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.114 [2024-07-12 16:03:24.324997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.114 [2024-07-12 16:03:24.325010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.325039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 
00:26:27.114 [2024-07-12 16:03:24.334809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.114 [2024-07-12 16:03:24.334938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.114 [2024-07-12 16:03:24.334965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.114 [2024-07-12 16:03:24.334980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.114 [2024-07-12 16:03:24.335006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.335034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 00:26:27.114 [2024-07-12 16:03:24.344856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.114 [2024-07-12 16:03:24.344949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.114 [2024-07-12 16:03:24.344973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.114 [2024-07-12 16:03:24.344989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.114 [2024-07-12 16:03:24.345001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.345030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 00:26:27.114 [2024-07-12 16:03:24.354872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.114 [2024-07-12 16:03:24.354973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.114 [2024-07-12 16:03:24.354997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.114 [2024-07-12 16:03:24.355012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.114 [2024-07-12 16:03:24.355025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.355068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 
00:26:27.114 [2024-07-12 16:03:24.364909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.114 [2024-07-12 16:03:24.364996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.114 [2024-07-12 16:03:24.365020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.114 [2024-07-12 16:03:24.365034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.114 [2024-07-12 16:03:24.365062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.365090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 00:26:27.114 [2024-07-12 16:03:24.374894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.114 [2024-07-12 16:03:24.374979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.114 [2024-07-12 16:03:24.375003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.114 [2024-07-12 16:03:24.375018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.114 [2024-07-12 16:03:24.375031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.375059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 00:26:27.114 [2024-07-12 16:03:24.385014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.114 [2024-07-12 16:03:24.385149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.114 [2024-07-12 16:03:24.385173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.114 [2024-07-12 16:03:24.385187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.114 [2024-07-12 16:03:24.385200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.385228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 
00:26:27.114 [2024-07-12 16:03:24.395011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.114 [2024-07-12 16:03:24.395115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.114 [2024-07-12 16:03:24.395139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.114 [2024-07-12 16:03:24.395153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.114 [2024-07-12 16:03:24.395171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.395210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 00:26:27.114 [2024-07-12 16:03:24.405084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.114 [2024-07-12 16:03:24.405194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.114 [2024-07-12 16:03:24.405230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.114 [2024-07-12 16:03:24.405257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.114 [2024-07-12 16:03:24.405282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.114 [2024-07-12 16:03:24.405327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.114 qpair failed and we were unable to recover it. 00:26:27.374 [2024-07-12 16:03:24.415086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.374 [2024-07-12 16:03:24.415184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.374 [2024-07-12 16:03:24.415210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.374 [2024-07-12 16:03:24.415226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.374 [2024-07-12 16:03:24.415238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.374 [2024-07-12 16:03:24.415268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.374 qpair failed and we were unable to recover it. 
00:26:27.374 [2024-07-12 16:03:24.425190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.374 [2024-07-12 16:03:24.425287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.374 [2024-07-12 16:03:24.425311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.374 [2024-07-12 16:03:24.425327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.374 [2024-07-12 16:03:24.425340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.374 [2024-07-12 16:03:24.425370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.374 qpair failed and we were unable to recover it. 00:26:27.374 [2024-07-12 16:03:24.435109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.374 [2024-07-12 16:03:24.435221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.374 [2024-07-12 16:03:24.435245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.374 [2024-07-12 16:03:24.435260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.374 [2024-07-12 16:03:24.435272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.374 [2024-07-12 16:03:24.435301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.374 qpair failed and we were unable to recover it. 00:26:27.374 [2024-07-12 16:03:24.445136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.374 [2024-07-12 16:03:24.445239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.374 [2024-07-12 16:03:24.445263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.374 [2024-07-12 16:03:24.445278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.374 [2024-07-12 16:03:24.445290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.374 [2024-07-12 16:03:24.445319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.374 qpair failed and we were unable to recover it. 
00:26:27.374 [2024-07-12 16:03:24.455179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.374 [2024-07-12 16:03:24.455266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.374 [2024-07-12 16:03:24.455291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.374 [2024-07-12 16:03:24.455305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.374 [2024-07-12 16:03:24.455318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.374 [2024-07-12 16:03:24.455346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.374 qpair failed and we were unable to recover it. 00:26:27.374 [2024-07-12 16:03:24.465228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.374 [2024-07-12 16:03:24.465347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.374 [2024-07-12 16:03:24.465370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.374 [2024-07-12 16:03:24.465386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.374 [2024-07-12 16:03:24.465398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.374 [2024-07-12 16:03:24.465426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.374 qpair failed and we were unable to recover it. 00:26:27.374 [2024-07-12 16:03:24.475284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.374 [2024-07-12 16:03:24.475421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.374 [2024-07-12 16:03:24.475445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.374 [2024-07-12 16:03:24.475460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.374 [2024-07-12 16:03:24.475472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.374 [2024-07-12 16:03:24.475499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.374 qpair failed and we were unable to recover it. 
00:26:27.374 [2024-07-12 16:03:24.485260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.374 [2024-07-12 16:03:24.485360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.374 [2024-07-12 16:03:24.485385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.374 [2024-07-12 16:03:24.485400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.374 [2024-07-12 16:03:24.485418] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.374 [2024-07-12 16:03:24.485447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.374 qpair failed and we were unable to recover it. 00:26:27.374 [2024-07-12 16:03:24.495302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.374 [2024-07-12 16:03:24.495399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.374 [2024-07-12 16:03:24.495422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.374 [2024-07-12 16:03:24.495437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.374 [2024-07-12 16:03:24.495449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.374 [2024-07-12 16:03:24.495477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.374 qpair failed and we were unable to recover it. 00:26:27.374 [2024-07-12 16:03:24.505391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.374 [2024-07-12 16:03:24.505519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.374 [2024-07-12 16:03:24.505543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.374 [2024-07-12 16:03:24.505558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.374 [2024-07-12 16:03:24.505570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.374 [2024-07-12 16:03:24.505598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.374 qpair failed and we were unable to recover it. 
00:26:27.374 [2024-07-12 16:03:24.515332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.374 [2024-07-12 16:03:24.515422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.374 [2024-07-12 16:03:24.515446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.374 [2024-07-12 16:03:24.515460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.374 [2024-07-12 16:03:24.515473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.374 [2024-07-12 16:03:24.515501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.374 qpair failed and we were unable to recover it. 00:26:27.374 [2024-07-12 16:03:24.525369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.374 [2024-07-12 16:03:24.525455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.374 [2024-07-12 16:03:24.525478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.374 [2024-07-12 16:03:24.525493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.374 [2024-07-12 16:03:24.525506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.374 [2024-07-12 16:03:24.525533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.374 qpair failed and we were unable to recover it. 00:26:27.374 [2024-07-12 16:03:24.535394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.374 [2024-07-12 16:03:24.535477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.374 [2024-07-12 16:03:24.535500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.374 [2024-07-12 16:03:24.535515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.374 [2024-07-12 16:03:24.535527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.374 [2024-07-12 16:03:24.535555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.374 qpair failed and we were unable to recover it. 
00:26:27.374 [2024-07-12 16:03:24.545454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.374 [2024-07-12 16:03:24.545587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.375 [2024-07-12 16:03:24.545610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.375 [2024-07-12 16:03:24.545626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.375 [2024-07-12 16:03:24.545639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.375 [2024-07-12 16:03:24.545666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.375 qpair failed and we were unable to recover it. 00:26:27.375 [2024-07-12 16:03:24.555448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.375 [2024-07-12 16:03:24.555551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.375 [2024-07-12 16:03:24.555575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.375 [2024-07-12 16:03:24.555590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.375 [2024-07-12 16:03:24.555602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.375 [2024-07-12 16:03:24.555630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.375 qpair failed and we were unable to recover it. 00:26:27.375 [2024-07-12 16:03:24.565549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.375 [2024-07-12 16:03:24.565673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.375 [2024-07-12 16:03:24.565697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.375 [2024-07-12 16:03:24.565712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.375 [2024-07-12 16:03:24.565747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.375 [2024-07-12 16:03:24.565778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.375 qpair failed and we were unable to recover it. 
00:26:27.375 [2024-07-12 16:03:24.575519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.375 [2024-07-12 16:03:24.575603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.375 [2024-07-12 16:03:24.575627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.375 [2024-07-12 16:03:24.575647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.375 [2024-07-12 16:03:24.575660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.375 [2024-07-12 16:03:24.575688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.375 qpair failed and we were unable to recover it. 00:26:27.375 [2024-07-12 16:03:24.585518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.375 [2024-07-12 16:03:24.585608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.375 [2024-07-12 16:03:24.585632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.375 [2024-07-12 16:03:24.585647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.375 [2024-07-12 16:03:24.585659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.375 [2024-07-12 16:03:24.585687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.375 qpair failed and we were unable to recover it. 00:26:27.375 [2024-07-12 16:03:24.595543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.375 [2024-07-12 16:03:24.595641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.375 [2024-07-12 16:03:24.595665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.375 [2024-07-12 16:03:24.595680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.375 [2024-07-12 16:03:24.595692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.375 [2024-07-12 16:03:24.595743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.375 qpair failed and we were unable to recover it. 
00:26:27.375 [2024-07-12 16:03:24.605644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.375 [2024-07-12 16:03:24.605774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.375 [2024-07-12 16:03:24.605798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.375 [2024-07-12 16:03:24.605813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.375 [2024-07-12 16:03:24.605826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.375 [2024-07-12 16:03:24.605856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.375 qpair failed and we were unable to recover it. 00:26:27.375 [2024-07-12 16:03:24.615597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.375 [2024-07-12 16:03:24.615685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.375 [2024-07-12 16:03:24.615709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.375 [2024-07-12 16:03:24.615745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.375 [2024-07-12 16:03:24.615760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.375 [2024-07-12 16:03:24.615790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.375 qpair failed and we were unable to recover it. 00:26:27.375 [2024-07-12 16:03:24.625659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.375 [2024-07-12 16:03:24.625777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.375 [2024-07-12 16:03:24.625802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.375 [2024-07-12 16:03:24.625818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.375 [2024-07-12 16:03:24.625831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.375 [2024-07-12 16:03:24.625860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.375 qpair failed and we were unable to recover it. 
00:26:27.375 [2024-07-12 16:03:24.635645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.375 [2024-07-12 16:03:24.635755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.375 [2024-07-12 16:03:24.635780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.375 [2024-07-12 16:03:24.635796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.375 [2024-07-12 16:03:24.635808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.375 [2024-07-12 16:03:24.635837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.375 qpair failed and we were unable to recover it. 00:26:27.375 [2024-07-12 16:03:24.645671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.375 [2024-07-12 16:03:24.645782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.375 [2024-07-12 16:03:24.645807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.375 [2024-07-12 16:03:24.645822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.375 [2024-07-12 16:03:24.645835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.375 [2024-07-12 16:03:24.645864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.375 qpair failed and we were unable to recover it. 00:26:27.375 [2024-07-12 16:03:24.655700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.375 [2024-07-12 16:03:24.655814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.375 [2024-07-12 16:03:24.655840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.375 [2024-07-12 16:03:24.655855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.375 [2024-07-12 16:03:24.655868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.375 [2024-07-12 16:03:24.655897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.375 qpair failed and we were unable to recover it. 
00:26:27.375 [2024-07-12 16:03:24.665834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.375 [2024-07-12 16:03:24.665991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.375 [2024-07-12 16:03:24.666028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.375 [2024-07-12 16:03:24.666068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.375 [2024-07-12 16:03:24.666090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.375 [2024-07-12 16:03:24.666122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.375 qpair failed and we were unable to recover it. 00:26:27.634 [2024-07-12 16:03:24.675782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.634 [2024-07-12 16:03:24.675919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.634 [2024-07-12 16:03:24.675946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.634 [2024-07-12 16:03:24.675962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.634 [2024-07-12 16:03:24.675975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.634 [2024-07-12 16:03:24.676005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.634 qpair failed and we were unable to recover it. 00:26:27.634 [2024-07-12 16:03:24.685847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.634 [2024-07-12 16:03:24.685941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.634 [2024-07-12 16:03:24.685966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.634 [2024-07-12 16:03:24.685981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.634 [2024-07-12 16:03:24.685994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.634 [2024-07-12 16:03:24.686025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.634 qpair failed and we were unable to recover it. 
00:26:27.634 [2024-07-12 16:03:24.695837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.634 [2024-07-12 16:03:24.695924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.634 [2024-07-12 16:03:24.695949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.634 [2024-07-12 16:03:24.695964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.634 [2024-07-12 16:03:24.695977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.634 [2024-07-12 16:03:24.696005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.634 qpair failed and we were unable to recover it. 00:26:27.634 [2024-07-12 16:03:24.705846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.634 [2024-07-12 16:03:24.705955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.634 [2024-07-12 16:03:24.705980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.634 [2024-07-12 16:03:24.705995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.634 [2024-07-12 16:03:24.706008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.634 [2024-07-12 16:03:24.706053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.634 qpair failed and we were unable to recover it. 00:26:27.634 [2024-07-12 16:03:24.715936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.634 [2024-07-12 16:03:24.716025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.634 [2024-07-12 16:03:24.716064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.634 [2024-07-12 16:03:24.716078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.634 [2024-07-12 16:03:24.716091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.634 [2024-07-12 16:03:24.716119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.634 qpair failed and we were unable to recover it. 
00:26:27.634 [2024-07-12 16:03:24.725980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.634 [2024-07-12 16:03:24.726108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.634 [2024-07-12 16:03:24.726132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.634 [2024-07-12 16:03:24.726147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.634 [2024-07-12 16:03:24.726160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.634 [2024-07-12 16:03:24.726187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.634 qpair failed and we were unable to recover it. 00:26:27.634 [2024-07-12 16:03:24.735960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.736097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.736121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.635 [2024-07-12 16:03:24.736136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.635 [2024-07-12 16:03:24.736148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.635 [2024-07-12 16:03:24.736176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.635 qpair failed and we were unable to recover it. 00:26:27.635 [2024-07-12 16:03:24.746065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.746167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.746190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.635 [2024-07-12 16:03:24.746204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.635 [2024-07-12 16:03:24.746217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.635 [2024-07-12 16:03:24.746245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.635 qpair failed and we were unable to recover it. 
00:26:27.635 [2024-07-12 16:03:24.756008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.756111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.756134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.635 [2024-07-12 16:03:24.756154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.635 [2024-07-12 16:03:24.756168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.635 [2024-07-12 16:03:24.756196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.635 qpair failed and we were unable to recover it. 00:26:27.635 [2024-07-12 16:03:24.766047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.766135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.766159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.635 [2024-07-12 16:03:24.766173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.635 [2024-07-12 16:03:24.766185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.635 [2024-07-12 16:03:24.766213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.635 qpair failed and we were unable to recover it. 00:26:27.635 [2024-07-12 16:03:24.776073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.776189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.776213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.635 [2024-07-12 16:03:24.776228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.635 [2024-07-12 16:03:24.776241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.635 [2024-07-12 16:03:24.776268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.635 qpair failed and we were unable to recover it. 
00:26:27.635 [2024-07-12 16:03:24.786098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.786198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.786223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.635 [2024-07-12 16:03:24.786238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.635 [2024-07-12 16:03:24.786250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.635 [2024-07-12 16:03:24.786279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.635 qpair failed and we were unable to recover it. 00:26:27.635 [2024-07-12 16:03:24.796116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.796204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.796228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.635 [2024-07-12 16:03:24.796243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.635 [2024-07-12 16:03:24.796256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.635 [2024-07-12 16:03:24.796283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.635 qpair failed and we were unable to recover it. 00:26:27.635 [2024-07-12 16:03:24.806139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.806230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.806254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.635 [2024-07-12 16:03:24.806269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.635 [2024-07-12 16:03:24.806281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.635 [2024-07-12 16:03:24.806309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.635 qpair failed and we were unable to recover it. 
00:26:27.635 [2024-07-12 16:03:24.816150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.816231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.816255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.635 [2024-07-12 16:03:24.816270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.635 [2024-07-12 16:03:24.816282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.635 [2024-07-12 16:03:24.816309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.635 qpair failed and we were unable to recover it. 00:26:27.635 [2024-07-12 16:03:24.826203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.826303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.826327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.635 [2024-07-12 16:03:24.826341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.635 [2024-07-12 16:03:24.826354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.635 [2024-07-12 16:03:24.826382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.635 qpair failed and we were unable to recover it. 00:26:27.635 [2024-07-12 16:03:24.836301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.836384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.836408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.635 [2024-07-12 16:03:24.836423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.635 [2024-07-12 16:03:24.836435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.635 [2024-07-12 16:03:24.836463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.635 qpair failed and we were unable to recover it. 
00:26:27.635 [2024-07-12 16:03:24.846200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.846322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.846352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.635 [2024-07-12 16:03:24.846368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.635 [2024-07-12 16:03:24.846380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.635 [2024-07-12 16:03:24.846407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.635 qpair failed and we were unable to recover it. 00:26:27.635 [2024-07-12 16:03:24.856270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.856364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.856387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.635 [2024-07-12 16:03:24.856402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.635 [2024-07-12 16:03:24.856415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.635 [2024-07-12 16:03:24.856442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.635 qpair failed and we were unable to recover it. 00:26:27.635 [2024-07-12 16:03:24.866280] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.866378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.866401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.635 [2024-07-12 16:03:24.866415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.635 [2024-07-12 16:03:24.866428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.635 [2024-07-12 16:03:24.866455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.635 qpair failed and we were unable to recover it. 
00:26:27.635 [2024-07-12 16:03:24.876392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.635 [2024-07-12 16:03:24.876483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.635 [2024-07-12 16:03:24.876507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.636 [2024-07-12 16:03:24.876522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.636 [2024-07-12 16:03:24.876534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.636 [2024-07-12 16:03:24.876563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-07-12 16:03:24.886332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.636 [2024-07-12 16:03:24.886461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.636 [2024-07-12 16:03:24.886486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.636 [2024-07-12 16:03:24.886500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.636 [2024-07-12 16:03:24.886513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.636 [2024-07-12 16:03:24.886546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-07-12 16:03:24.896372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.636 [2024-07-12 16:03:24.896456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.636 [2024-07-12 16:03:24.896479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.636 [2024-07-12 16:03:24.896493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.636 [2024-07-12 16:03:24.896506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.636 [2024-07-12 16:03:24.896534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.636 qpair failed and we were unable to recover it. 
00:26:27.636 [2024-07-12 16:03:24.906373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.636 [2024-07-12 16:03:24.906467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.636 [2024-07-12 16:03:24.906493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.636 [2024-07-12 16:03:24.906508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.636 [2024-07-12 16:03:24.906520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.636 [2024-07-12 16:03:24.906548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-07-12 16:03:24.916410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.636 [2024-07-12 16:03:24.916494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.636 [2024-07-12 16:03:24.916518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.636 [2024-07-12 16:03:24.916533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.636 [2024-07-12 16:03:24.916546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.636 [2024-07-12 16:03:24.916573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.636 qpair failed and we were unable to recover it. 00:26:27.636 [2024-07-12 16:03:24.926462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.636 [2024-07-12 16:03:24.926569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.636 [2024-07-12 16:03:24.926605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.636 [2024-07-12 16:03:24.926633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.636 [2024-07-12 16:03:24.926656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.636 [2024-07-12 16:03:24.926700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.636 qpair failed and we were unable to recover it. 
00:26:27.895 [2024-07-12 16:03:24.936500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:24.936600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:24.936630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:24.936645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:24.936659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:24.936687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 00:26:27.895 [2024-07-12 16:03:24.946538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:24.946634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:24.946658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:24.946673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:24.946685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:24.946714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 00:26:27.895 [2024-07-12 16:03:24.956541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:24.956629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:24.956654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:24.956669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:24.956681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:24.956709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 
00:26:27.895 [2024-07-12 16:03:24.966564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:24.966656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:24.966680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:24.966694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:24.966707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:24.966759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 00:26:27.895 [2024-07-12 16:03:24.976607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:24.976701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:24.976750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:24.976767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:24.976780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:24.976813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 00:26:27.895 [2024-07-12 16:03:24.986604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:24.986698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:24.986722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:24.986742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:24.986771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:24.986802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 
00:26:27.895 [2024-07-12 16:03:24.996760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:24.996868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:24.996893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:24.996908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:24.996921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:24.996950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 00:26:27.895 [2024-07-12 16:03:25.006784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:25.006883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:25.006908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:25.006922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:25.006935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:25.006964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 00:26:27.895 [2024-07-12 16:03:25.016685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:25.016808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:25.016835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:25.016850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:25.016863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:25.016892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 
00:26:27.895 [2024-07-12 16:03:25.026794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:25.026888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:25.026917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:25.026933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:25.026946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:25.026975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 00:26:27.895 [2024-07-12 16:03:25.036810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:25.036950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:25.036977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:25.036992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:25.037005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:25.037049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 00:26:27.895 [2024-07-12 16:03:25.046846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:25.046939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:25.046965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:25.046981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:25.046994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:25.047022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 
00:26:27.895 [2024-07-12 16:03:25.056859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:25.056947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:25.056972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:25.056992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:25.057006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:25.057049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 00:26:27.895 [2024-07-12 16:03:25.066890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:25.067053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:25.067078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:25.067094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:25.067108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:25.067142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 00:26:27.895 [2024-07-12 16:03:25.076857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.895 [2024-07-12 16:03:25.076961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.895 [2024-07-12 16:03:25.076987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.895 [2024-07-12 16:03:25.077002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.895 [2024-07-12 16:03:25.077014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.895 [2024-07-12 16:03:25.077044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.895 qpair failed and we were unable to recover it. 
00:26:27.896 [2024-07-12 16:03:25.087000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.896 [2024-07-12 16:03:25.087144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.896 [2024-07-12 16:03:25.087171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.896 [2024-07-12 16:03:25.087186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.896 [2024-07-12 16:03:25.087198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.896 [2024-07-12 16:03:25.087225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.896 qpair failed and we were unable to recover it. 00:26:27.896 [2024-07-12 16:03:25.097034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.896 [2024-07-12 16:03:25.097117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.896 [2024-07-12 16:03:25.097141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.896 [2024-07-12 16:03:25.097155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.896 [2024-07-12 16:03:25.097168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.896 [2024-07-12 16:03:25.097194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.896 qpair failed and we were unable to recover it. 00:26:27.896 [2024-07-12 16:03:25.107076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.896 [2024-07-12 16:03:25.107165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.896 [2024-07-12 16:03:25.107191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.896 [2024-07-12 16:03:25.107206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.896 [2024-07-12 16:03:25.107218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.896 [2024-07-12 16:03:25.107246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.896 qpair failed and we were unable to recover it. 
00:26:27.896 [2024-07-12 16:03:25.116981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.896 [2024-07-12 16:03:25.117081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.896 [2024-07-12 16:03:25.117110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.896 [2024-07-12 16:03:25.117125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.896 [2024-07-12 16:03:25.117153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.896 [2024-07-12 16:03:25.117182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.896 qpair failed and we were unable to recover it. 00:26:27.896 [2024-07-12 16:03:25.127015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.896 [2024-07-12 16:03:25.127133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.896 [2024-07-12 16:03:25.127158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.896 [2024-07-12 16:03:25.127172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.896 [2024-07-12 16:03:25.127185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.896 [2024-07-12 16:03:25.127213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.896 qpair failed and we were unable to recover it. 00:26:27.896 [2024-07-12 16:03:25.137057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.896 [2024-07-12 16:03:25.137192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.896 [2024-07-12 16:03:25.137218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.896 [2024-07-12 16:03:25.137232] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.896 [2024-07-12 16:03:25.137245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.896 [2024-07-12 16:03:25.137273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.896 qpair failed and we were unable to recover it. 
00:26:27.896 [2024-07-12 16:03:25.147090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.896 [2024-07-12 16:03:25.147179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.896 [2024-07-12 16:03:25.147203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.896 [2024-07-12 16:03:25.147217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.896 [2024-07-12 16:03:25.147230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.896 [2024-07-12 16:03:25.147257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.896 qpair failed and we were unable to recover it. 00:26:27.896 [2024-07-12 16:03:25.157107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.896 [2024-07-12 16:03:25.157250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.896 [2024-07-12 16:03:25.157275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.896 [2024-07-12 16:03:25.157290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.896 [2024-07-12 16:03:25.157308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.896 [2024-07-12 16:03:25.157336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.896 qpair failed and we were unable to recover it. 00:26:27.896 [2024-07-12 16:03:25.167213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.896 [2024-07-12 16:03:25.167293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.896 [2024-07-12 16:03:25.167317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.896 [2024-07-12 16:03:25.167331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.896 [2024-07-12 16:03:25.167343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.896 [2024-07-12 16:03:25.167370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.896 qpair failed and we were unable to recover it. 
00:26:27.896 [2024-07-12 16:03:25.177170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:27.896 [2024-07-12 16:03:25.177263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:27.896 [2024-07-12 16:03:25.177286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:27.896 [2024-07-12 16:03:25.177301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.896 [2024-07-12 16:03:25.177313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:27.896 [2024-07-12 16:03:25.177341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:27.896 qpair failed and we were unable to recover it. 00:26:27.896 [2024-07-12 16:03:25.187224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.155 [2024-07-12 16:03:25.187340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.155 [2024-07-12 16:03:25.187369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.155 [2024-07-12 16:03:25.187385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.155 [2024-07-12 16:03:25.187398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.155 [2024-07-12 16:03:25.187427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.155 qpair failed and we were unable to recover it. 00:26:28.155 [2024-07-12 16:03:25.197250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.155 [2024-07-12 16:03:25.197345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.155 [2024-07-12 16:03:25.197370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.155 [2024-07-12 16:03:25.197384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.155 [2024-07-12 16:03:25.197397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.155 [2024-07-12 16:03:25.197425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.155 qpair failed and we were unable to recover it. 
00:26:28.155 [2024-07-12 16:03:25.207254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.155 [2024-07-12 16:03:25.207349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.155 [2024-07-12 16:03:25.207373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.155 [2024-07-12 16:03:25.207388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.155 [2024-07-12 16:03:25.207400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.155 [2024-07-12 16:03:25.207429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.155 qpair failed and we were unable to recover it. 00:26:28.155 [2024-07-12 16:03:25.217325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.155 [2024-07-12 16:03:25.217415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.155 [2024-07-12 16:03:25.217439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.155 [2024-07-12 16:03:25.217453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.155 [2024-07-12 16:03:25.217465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.155 [2024-07-12 16:03:25.217492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.155 qpair failed and we were unable to recover it. 00:26:28.155 [2024-07-12 16:03:25.227304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.155 [2024-07-12 16:03:25.227392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.155 [2024-07-12 16:03:25.227416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.155 [2024-07-12 16:03:25.227430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.155 [2024-07-12 16:03:25.227442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.155 [2024-07-12 16:03:25.227470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.155 qpair failed and we were unable to recover it. 
00:26:28.155 [2024-07-12 16:03:25.237317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.155 [2024-07-12 16:03:25.237404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.155 [2024-07-12 16:03:25.237427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.155 [2024-07-12 16:03:25.237442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.155 [2024-07-12 16:03:25.237454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.155 [2024-07-12 16:03:25.237481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.155 qpair failed and we were unable to recover it. 00:26:28.155 [2024-07-12 16:03:25.247334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.155 [2024-07-12 16:03:25.247429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.155 [2024-07-12 16:03:25.247453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.155 [2024-07-12 16:03:25.247467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.155 [2024-07-12 16:03:25.247484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.155 [2024-07-12 16:03:25.247513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.155 qpair failed and we were unable to recover it. 00:26:28.155 [2024-07-12 16:03:25.257389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.155 [2024-07-12 16:03:25.257476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.155 [2024-07-12 16:03:25.257502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.155 [2024-07-12 16:03:25.257516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.155 [2024-07-12 16:03:25.257529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.155 [2024-07-12 16:03:25.257557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.155 qpair failed and we were unable to recover it. 
00:26:28.155 [2024-07-12 16:03:25.267423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.155 [2024-07-12 16:03:25.267514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.155 [2024-07-12 16:03:25.267538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.155 [2024-07-12 16:03:25.267553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.155 [2024-07-12 16:03:25.267566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.155 [2024-07-12 16:03:25.267593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.155 qpair failed and we were unable to recover it. 00:26:28.155 [2024-07-12 16:03:25.277429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.155 [2024-07-12 16:03:25.277517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.156 [2024-07-12 16:03:25.277541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.156 [2024-07-12 16:03:25.277557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.156 [2024-07-12 16:03:25.277569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.156 [2024-07-12 16:03:25.277597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.156 qpair failed and we were unable to recover it. 00:26:28.156 [2024-07-12 16:03:25.287472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.156 [2024-07-12 16:03:25.287562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.156 [2024-07-12 16:03:25.287586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.156 [2024-07-12 16:03:25.287600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.156 [2024-07-12 16:03:25.287613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.156 [2024-07-12 16:03:25.287641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.156 qpair failed and we were unable to recover it. 
00:26:28.156 [2024-07-12 16:03:25.297487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.156 [2024-07-12 16:03:25.297606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.156 [2024-07-12 16:03:25.297632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.156 [2024-07-12 16:03:25.297647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.156 [2024-07-12 16:03:25.297659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.156 [2024-07-12 16:03:25.297686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.156 qpair failed and we were unable to recover it. 00:26:28.156 [2024-07-12 16:03:25.307588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.156 [2024-07-12 16:03:25.307698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.156 [2024-07-12 16:03:25.307747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.156 [2024-07-12 16:03:25.307764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.156 [2024-07-12 16:03:25.307777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.156 [2024-07-12 16:03:25.307807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.156 qpair failed and we were unable to recover it. 00:26:28.156 [2024-07-12 16:03:25.317640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.156 [2024-07-12 16:03:25.317726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.156 [2024-07-12 16:03:25.317760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.156 [2024-07-12 16:03:25.317775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.156 [2024-07-12 16:03:25.317788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.156 [2024-07-12 16:03:25.317817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.156 qpair failed and we were unable to recover it. 
00:26:28.156 [2024-07-12 16:03:25.327565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.156 [2024-07-12 16:03:25.327662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.156 [2024-07-12 16:03:25.327686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.156 [2024-07-12 16:03:25.327701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.156 [2024-07-12 16:03:25.327713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.156 [2024-07-12 16:03:25.327764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.156 qpair failed and we were unable to recover it. 00:26:28.156 [2024-07-12 16:03:25.337584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.156 [2024-07-12 16:03:25.337679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.156 [2024-07-12 16:03:25.337702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.156 [2024-07-12 16:03:25.337743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.156 [2024-07-12 16:03:25.337760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.156 [2024-07-12 16:03:25.337790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.156 qpair failed and we were unable to recover it. 00:26:28.156 [2024-07-12 16:03:25.347681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.156 [2024-07-12 16:03:25.347837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.156 [2024-07-12 16:03:25.347864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.156 [2024-07-12 16:03:25.347880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.156 [2024-07-12 16:03:25.347892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.156 [2024-07-12 16:03:25.347921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.156 qpair failed and we were unable to recover it. 
00:26:28.156 [2024-07-12 16:03:25.357647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.156 [2024-07-12 16:03:25.357762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.156 [2024-07-12 16:03:25.357788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.156 [2024-07-12 16:03:25.357805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.156 [2024-07-12 16:03:25.357817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.156 [2024-07-12 16:03:25.357847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.156 qpair failed and we were unable to recover it. 00:26:28.156 [2024-07-12 16:03:25.367684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.156 [2024-07-12 16:03:25.367785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.156 [2024-07-12 16:03:25.367811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.156 [2024-07-12 16:03:25.367826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.156 [2024-07-12 16:03:25.367839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.156 [2024-07-12 16:03:25.367868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.156 qpair failed and we were unable to recover it. 00:26:28.156 [2024-07-12 16:03:25.377749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.156 [2024-07-12 16:03:25.377849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.156 [2024-07-12 16:03:25.377874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.156 [2024-07-12 16:03:25.377889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.156 [2024-07-12 16:03:25.377902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.156 [2024-07-12 16:03:25.377931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.157 qpair failed and we were unable to recover it. 
00:26:28.157 [2024-07-12 16:03:25.387785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.157 [2024-07-12 16:03:25.387881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.157 [2024-07-12 16:03:25.387907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.157 [2024-07-12 16:03:25.387922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.157 [2024-07-12 16:03:25.387934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.157 [2024-07-12 16:03:25.387963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.157 qpair failed and we were unable to recover it. 00:26:28.157 [2024-07-12 16:03:25.397787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.157 [2024-07-12 16:03:25.397878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.157 [2024-07-12 16:03:25.397904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.157 [2024-07-12 16:03:25.397919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.157 [2024-07-12 16:03:25.397932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.157 [2024-07-12 16:03:25.397961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.157 qpair failed and we were unable to recover it. 00:26:28.157 [2024-07-12 16:03:25.407866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.157 [2024-07-12 16:03:25.407960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.157 [2024-07-12 16:03:25.407985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.157 [2024-07-12 16:03:25.408000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.157 [2024-07-12 16:03:25.408013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.157 [2024-07-12 16:03:25.408056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.157 qpair failed and we were unable to recover it. 
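A few more of the same CONNECT failures follow, and the sequence ends when the host's Keep Alive submission also fails, at which point the controller is reset and re-attached (next lines). The target-side ctrlr.c messages show that the controller the host keeps trying to add I/O queues to no longer exists on the target; a comparable target-side interruption can be driven by hand with SPDK's rpc.py by dropping and re-adding the TCP listener (a hedged sketch only; the checkout path is assumed from this job's workspace, and this is not necessarily the mechanism target_disconnect.sh itself uses):
# assumed checkout path, taken from the workspace paths printed elsewhere in this log
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# drop the listener so host CONNECT / Keep Alive traffic starts failing
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 5
# restore it so the host can reset and re-attach the controller
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420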
00:26:28.157 [2024-07-12 16:03:25.417864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.157 [2024-07-12 16:03:25.417960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.157 [2024-07-12 16:03:25.417985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.157 [2024-07-12 16:03:25.418000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.157 [2024-07-12 16:03:25.418013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.157 [2024-07-12 16:03:25.418042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.157 qpair failed and we were unable to recover it. 00:26:28.157 [2024-07-12 16:03:25.427916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:28.157 [2024-07-12 16:03:25.428009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:28.157 [2024-07-12 16:03:25.428048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:28.157 [2024-07-12 16:03:25.428070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:28.157 [2024-07-12 16:03:25.428084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13cc1e0 00:26:28.157 [2024-07-12 16:03:25.428111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:28.157 qpair failed and we were unable to recover it. 00:26:28.157 [2024-07-12 16:03:25.428252] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:26:28.157 A controller has encountered a failure and is being reset. 00:26:28.157 [2024-07-12 16:03:25.428310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c0c80 (9): Bad file descriptor 00:26:28.414 Controller properly reset. 00:26:28.414 Initializing NVMe Controllers 00:26:28.414 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:28.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:28.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:28.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:28.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:28.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:28.414 Initialization complete. Launching workers. 
00:26:28.414 Starting thread on core 1 00:26:28.414 Starting thread on core 2 00:26:28.414 Starting thread on core 3 00:26:28.414 Starting thread on core 0 00:26:28.414 16:03:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:28.414 00:26:28.414 real 0m10.920s 00:26:28.414 user 0m18.948s 00:26:28.414 sys 0m5.682s 00:26:28.414 16:03:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:28.414 16:03:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:28.414 ************************************ 00:26:28.414 END TEST nvmf_target_disconnect_tc2 00:26:28.414 ************************************ 00:26:28.414 16:03:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:26:28.414 16:03:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:28.414 16:03:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:28.414 16:03:25 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:28.414 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:28.414 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:26:28.414 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:28.414 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:26:28.414 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:28.414 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:28.414 rmmod nvme_tcp 00:26:28.414 rmmod nvme_fabrics 00:26:28.414 rmmod nvme_keyring 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 864874 ']' 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 864874 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 864874 ']' 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 864874 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 864874 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 864874' 00:26:28.415 killing process with pid 864874 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 864874 00:26:28.415 16:03:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 864874 00:26:28.981 16:03:25 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:28.981 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:28.981 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:28.981 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:28.981 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:28.981 16:03:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.981 16:03:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.981 16:03:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.881 16:03:28 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:30.881 00:26:30.881 real 0m15.803s 00:26:30.881 user 0m45.608s 00:26:30.881 sys 0m7.603s 00:26:30.881 16:03:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:30.881 16:03:28 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:30.881 ************************************ 00:26:30.881 END TEST nvmf_target_disconnect 00:26:30.881 ************************************ 00:26:30.881 16:03:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:30.881 16:03:28 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:26:30.881 16:03:28 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:30.882 16:03:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:30.882 16:03:28 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:26:30.882 00:26:30.882 real 19m18.029s 00:26:30.882 user 45m27.272s 00:26:30.882 sys 5m3.533s 00:26:30.882 16:03:28 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:30.882 16:03:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:30.882 ************************************ 00:26:30.882 END TEST nvmf_tcp 00:26:30.882 ************************************ 00:26:30.882 16:03:28 -- common/autotest_common.sh@1142 -- # return 0 00:26:30.882 16:03:28 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:26:30.882 16:03:28 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:30.882 16:03:28 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:30.882 16:03:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:30.882 16:03:28 -- common/autotest_common.sh@10 -- # set +x 00:26:30.882 ************************************ 00:26:30.882 START TEST spdkcli_nvmf_tcp 00:26:30.882 ************************************ 00:26:30.882 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:31.140 * Looking for test storage... 
00:26:31.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.140 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=866070 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 866070 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 866070 ']' 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:31.141 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:31.141 [2024-07-12 16:03:28.253716] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:26:31.141 [2024-07-12 16:03:28.253836] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid866070 ] 00:26:31.141 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.141 [2024-07-12 16:03:28.313497] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:31.141 [2024-07-12 16:03:28.423462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.141 [2024-07-12 16:03:28.423465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.399 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:31.399 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:26:31.399 16:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:31.399 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:31.399 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:31.399 16:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:31.399 16:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:31.399 16:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:31.399 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:31.399 16:03:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:31.399 16:03:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:31.399 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:31.399 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:31.399 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:31.399 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:31.399 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:31.399 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:31.399 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:31.399 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:31.399 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:31.399 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:31.399 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:31.399 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:31.399 ' 00:26:33.925 [2024-07-12 16:03:31.054582] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.297 [2024-07-12 16:03:32.274877] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:37.820 [2024-07-12 16:03:34.533857] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:39.718 [2024-07-12 16:03:36.528027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:41.121 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:41.121 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:41.121 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:41.121 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:41.121 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:41.121 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:41.121 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:41.121 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:41.121 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:41.121 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:41.121 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:41.121 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:41.121 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:41.121 16:03:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:41.121 16:03:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:41.121 16:03:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.121 16:03:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:41.121 16:03:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:41.121 16:03:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.121 16:03:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:26:41.121 16:03:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:41.378 16:03:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:41.378 16:03:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:41.378 16:03:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:41.378 16:03:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:41.378 16:03:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.378 16:03:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:41.378 16:03:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:41.378 16:03:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.378 16:03:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:41.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:41.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:41.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:41.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:41.378 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:41.378 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:41.378 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:41.378 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:41.378 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:41.378 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:41.378 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:41.378 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:41.378 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:41.378 ' 00:26:46.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:46.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:46.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:46.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:46.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:46.634 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:46.634 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:46.634 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:46.634 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:46.634 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:46.634 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:26:46.634 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:46.634 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:46.634 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:46.634 16:03:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:46.634 16:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:46.634 16:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:46.634 16:03:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 866070 00:26:46.634 16:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 866070 ']' 00:26:46.634 16:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 866070 00:26:46.634 16:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:26:46.634 16:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:46.634 16:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 866070 00:26:46.634 16:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:46.634 16:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:46.634 16:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 866070' 00:26:46.634 killing process with pid 866070 00:26:46.634 16:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 866070 00:26:46.634 16:03:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 866070 00:26:46.891 16:03:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:46.891 16:03:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:46.891 16:03:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 866070 ']' 00:26:46.891 16:03:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 866070 00:26:46.891 16:03:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 866070 ']' 00:26:46.891 16:03:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 866070 00:26:46.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (866070) - No such process 00:26:46.891 16:03:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 866070 is not found' 00:26:46.891 Process with pid 866070 is not found 00:26:46.891 16:03:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:46.891 16:03:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:46.891 16:03:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:46.891 00:26:46.891 real 0m16.015s 00:26:46.891 user 0m33.749s 00:26:46.891 sys 0m0.826s 00:26:46.891 16:03:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:46.891 16:03:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:46.891 ************************************ 00:26:46.891 END TEST spdkcli_nvmf_tcp 00:26:46.891 ************************************ 00:26:46.891 16:03:44 -- common/autotest_common.sh@1142 -- # return 0 00:26:46.891 16:03:44 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:46.891 16:03:44 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:46.891 16:03:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:46.891 16:03:44 -- common/autotest_common.sh@10 -- # set +x 00:26:47.147 ************************************ 00:26:47.147 START TEST nvmf_identify_passthru 00:26:47.147 ************************************ 00:26:47.147 16:03:44 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:47.147 * Looking for test storage... 00:26:47.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:47.147 16:03:44 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.147 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.147 16:03:44 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.147 16:03:44 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.147 16:03:44 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.147 16:03:44 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.148 16:03:44 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.148 16:03:44 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.148 16:03:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:47.148 16:03:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:47.148 16:03:44 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.148 16:03:44 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.148 16:03:44 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.148 16:03:44 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.148 16:03:44 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.148 16:03:44 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.148 16:03:44 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.148 16:03:44 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:47.148 16:03:44 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.148 16:03:44 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.148 16:03:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:47.148 16:03:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:47.148 16:03:44 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:26:47.148 16:03:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.045 16:03:46 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:49.045 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:49.045 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:49.045 Found net devices under 0000:84:00.0: cvl_0_0 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:49.045 Found net devices under 0000:84:00.1: cvl_0_1 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.045 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:49.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:26:49.303 00:26:49.303 --- 10.0.0.2 ping statistics --- 00:26:49.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.303 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:49.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:26:49.303 00:26:49.303 --- 10.0.0.1 ping statistics --- 00:26:49.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.303 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:49.303 16:03:46 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:49.303 16:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:49.303 16:03:46 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:49.303 16:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:49.303 16:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:49.303 16:03:46 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:26:49.303 16:03:46 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:26:49.303 16:03:46 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:26:49.303 16:03:46 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:26:49.303 16:03:46 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:26:49.303 16:03:46 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:26:49.303 16:03:46 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:49.303 16:03:46 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:49.303 16:03:46 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:26:49.303 16:03:46 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:26:49.303 16:03:46 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:26:49.303 16:03:46 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:82:00.0 00:26:49.303 16:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:26:49.303 16:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:26:49.303 16:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:26:49.303 16:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:49.303 16:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:49.303 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.487 
16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:26:53.487 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:26:53.487 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:53.487 16:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:53.487 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.671 16:03:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:26:57.671 16:03:54 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:57.671 16:03:54 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:57.671 16:03:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:57.671 16:03:54 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:57.671 16:03:54 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:57.671 16:03:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:57.671 16:03:54 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=870717 00:26:57.671 16:03:54 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:57.671 16:03:54 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:57.671 16:03:54 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 870717 00:26:57.671 16:03:54 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 870717 ']' 00:26:57.671 16:03:54 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.671 16:03:54 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:57.671 16:03:54 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.671 16:03:54 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:57.672 16:03:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:57.930 [2024-07-12 16:03:54.979819] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:26:57.930 [2024-07-12 16:03:54.979913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.930 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.930 [2024-07-12 16:03:55.043417] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:57.930 [2024-07-12 16:03:55.152573] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:57.930 [2024-07-12 16:03:55.152650] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:57.930 [2024-07-12 16:03:55.152664] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:57.930 [2024-07-12 16:03:55.152689] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:57.930 [2024-07-12 16:03:55.152699] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:57.930 [2024-07-12 16:03:55.152789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.930 [2024-07-12 16:03:55.152856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:57.930 [2024-07-12 16:03:55.152903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:57.930 [2024-07-12 16:03:55.152906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.930 16:03:55 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:57.930 16:03:55 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:26:57.930 16:03:55 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:57.930 16:03:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.930 16:03:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:57.930 INFO: Log level set to 20 00:26:57.930 INFO: Requests: 00:26:57.930 { 00:26:57.930 "jsonrpc": "2.0", 00:26:57.930 "method": "nvmf_set_config", 00:26:57.930 "id": 1, 00:26:57.930 "params": { 00:26:57.930 "admin_cmd_passthru": { 00:26:57.930 "identify_ctrlr": true 00:26:57.930 } 00:26:57.930 } 00:26:57.930 } 00:26:57.930 00:26:57.930 INFO: response: 00:26:57.930 { 00:26:57.930 "jsonrpc": "2.0", 00:26:57.930 "id": 1, 00:26:57.930 "result": true 00:26:57.930 } 00:26:57.930 00:26:57.930 16:03:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.930 16:03:55 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:57.930 16:03:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.930 16:03:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:57.930 INFO: Setting log level to 20 00:26:57.930 INFO: Setting log level to 20 00:26:57.930 INFO: Log level set to 20 00:26:57.930 INFO: Log level set to 20 00:26:57.930 INFO: Requests: 00:26:57.930 { 00:26:57.930 "jsonrpc": "2.0", 00:26:57.930 "method": "framework_start_init", 00:26:57.930 "id": 1 00:26:57.930 } 00:26:57.930 00:26:57.930 INFO: Requests: 00:26:57.930 { 00:26:57.930 "jsonrpc": "2.0", 00:26:57.930 "method": "framework_start_init", 00:26:57.930 "id": 1 00:26:57.930 } 00:26:57.930 00:26:58.188 [2024-07-12 16:03:55.303161] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:58.188 INFO: response: 00:26:58.188 { 00:26:58.188 "jsonrpc": "2.0", 00:26:58.188 "id": 1, 00:26:58.188 "result": true 00:26:58.188 } 00:26:58.188 00:26:58.188 INFO: response: 00:26:58.188 { 00:26:58.189 "jsonrpc": "2.0", 00:26:58.189 "id": 1, 00:26:58.189 "result": true 00:26:58.189 } 00:26:58.189 00:26:58.189 16:03:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.189 16:03:55 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:58.189 16:03:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.189 16:03:55 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:26:58.189 INFO: Setting log level to 40 00:26:58.189 INFO: Setting log level to 40 00:26:58.189 INFO: Setting log level to 40 00:26:58.189 [2024-07-12 16:03:55.313352] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.189 16:03:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.189 16:03:55 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:58.189 16:03:55 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:58.189 16:03:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:58.189 16:03:55 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:26:58.189 16:03:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.189 16:03:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.468 Nvme0n1 00:27:01.468 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.468 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:01.468 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.468 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.468 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.468 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:01.468 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.468 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.468 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.468 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.468 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.468 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.468 [2024-07-12 16:03:58.210528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.468 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.468 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:01.468 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.468 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.468 [ 00:27:01.468 { 00:27:01.468 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:01.468 "subtype": "Discovery", 00:27:01.468 "listen_addresses": [], 00:27:01.468 "allow_any_host": true, 00:27:01.468 "hosts": [] 00:27:01.468 }, 00:27:01.468 { 00:27:01.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:01.468 "subtype": "NVMe", 00:27:01.468 "listen_addresses": [ 00:27:01.468 { 00:27:01.468 "trtype": "TCP", 00:27:01.468 "adrfam": "IPv4", 00:27:01.468 "traddr": "10.0.0.2", 00:27:01.468 "trsvcid": "4420" 00:27:01.468 } 00:27:01.468 ], 00:27:01.468 "allow_any_host": true, 00:27:01.468 "hosts": [], 00:27:01.468 "serial_number": 
"SPDK00000000000001", 00:27:01.468 "model_number": "SPDK bdev Controller", 00:27:01.468 "max_namespaces": 1, 00:27:01.468 "min_cntlid": 1, 00:27:01.468 "max_cntlid": 65519, 00:27:01.468 "namespaces": [ 00:27:01.468 { 00:27:01.468 "nsid": 1, 00:27:01.468 "bdev_name": "Nvme0n1", 00:27:01.468 "name": "Nvme0n1", 00:27:01.468 "nguid": "A626A3C8062E4CAC8AF50F3A4F203EF8", 00:27:01.468 "uuid": "a626a3c8-062e-4cac-8af5-0f3a4f203ef8" 00:27:01.468 } 00:27:01.468 ] 00:27:01.468 } 00:27:01.468 ] 00:27:01.468 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.468 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:01.468 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:01.468 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:01.468 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.468 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:27:01.468 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:01.468 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:01.468 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:01.468 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.468 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:01.469 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:27:01.469 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:01.469 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:01.469 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.469 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.469 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.469 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:01.469 16:03:58 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:01.469 16:03:58 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:01.469 16:03:58 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:01.469 16:03:58 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:01.469 16:03:58 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:01.469 16:03:58 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.469 16:03:58 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:01.469 rmmod nvme_tcp 00:27:01.469 rmmod nvme_fabrics 00:27:01.469 rmmod nvme_keyring 00:27:01.469 16:03:58 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:01.469 16:03:58 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:01.469 16:03:58 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:01.469 16:03:58 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 870717 ']' 00:27:01.469 16:03:58 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 870717 00:27:01.469 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 870717 ']' 00:27:01.469 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 870717 00:27:01.469 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:27:01.469 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:01.469 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 870717 00:27:01.469 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:01.469 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:01.469 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 870717' 00:27:01.469 killing process with pid 870717 00:27:01.469 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 870717 00:27:01.469 16:03:58 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 870717 00:27:03.367 16:04:00 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:03.367 16:04:00 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:03.367 16:04:00 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:03.367 16:04:00 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.367 16:04:00 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:03.367 16:04:00 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.367 16:04:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:03.367 16:04:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.268 16:04:02 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:05.268 00:27:05.268 real 0m18.006s 00:27:05.268 user 0m26.352s 00:27:05.268 sys 0m2.343s 00:27:05.268 16:04:02 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:05.268 16:04:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:05.268 ************************************ 00:27:05.268 END TEST nvmf_identify_passthru 00:27:05.268 ************************************ 00:27:05.268 16:04:02 -- common/autotest_common.sh@1142 -- # return 0 00:27:05.268 16:04:02 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:05.268 16:04:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:05.268 16:04:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:05.268 16:04:02 -- common/autotest_common.sh@10 -- # set +x 00:27:05.268 ************************************ 00:27:05.268 START TEST nvmf_dif 00:27:05.268 ************************************ 00:27:05.268 16:04:02 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:05.268 * Looking for test storage... 
00:27:05.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:05.268 16:04:02 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:05.268 16:04:02 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.268 16:04:02 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.268 16:04:02 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.268 16:04:02 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.268 16:04:02 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.268 16:04:02 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.268 16:04:02 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:27:05.268 16:04:02 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:05.268 16:04:02 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:05.268 16:04:02 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:05.268 16:04:02 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:05.268 16:04:02 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:05.268 16:04:02 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.268 16:04:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:05.268 16:04:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:05.268 16:04:02 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:27:05.268 16:04:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:07.167 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.167 16:04:04 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:07.168 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:07.168 Found net devices under 0000:84:00.0: cvl_0_0 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:07.168 Found net devices under 0000:84:00.1: cvl_0_1 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:07.168 16:04:04 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:07.426 16:04:04 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:07.426 16:04:04 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:07.426 16:04:04 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:07.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:07.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:27:07.426 00:27:07.426 --- 10.0.0.2 ping statistics --- 00:27:07.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.426 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:27:07.426 16:04:04 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:07.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:07.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:27:07.426 00:27:07.426 --- 10.0.0.1 ping statistics --- 00:27:07.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.426 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:27:07.426 16:04:04 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.426 16:04:04 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:27:07.426 16:04:04 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:07.426 16:04:04 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:08.800 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:08.800 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:08.800 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:08.800 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:08.800 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:08.800 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:08.800 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:08.800 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:08.800 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:08.800 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:08.800 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:08.800 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:08.800 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:08.800 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:08.800 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:08.800 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:08.801 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:08.801 16:04:05 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:08.801 16:04:05 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:08.801 16:04:05 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:08.801 16:04:05 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:08.801 16:04:05 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:08.801 16:04:05 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:08.801 16:04:05 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:08.801 16:04:05 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:08.801 16:04:05 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:08.801 16:04:05 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:08.801 16:04:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:08.801 16:04:05 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=873889 00:27:08.801 16:04:05 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:08.801 16:04:05 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 873889 00:27:08.801 16:04:05 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 873889 ']' 00:27:08.801 16:04:05 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.801 16:04:05 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:08.801 16:04:05 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.801 16:04:05 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:08.801 16:04:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:08.801 [2024-07-12 16:04:05.955940] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:27:08.801 [2024-07-12 16:04:05.956012] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.801 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.801 [2024-07-12 16:04:06.018507] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.058 [2024-07-12 16:04:06.128311] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.058 [2024-07-12 16:04:06.128362] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.058 [2024-07-12 16:04:06.128376] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.058 [2024-07-12 16:04:06.128387] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.058 [2024-07-12 16:04:06.128397] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
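The fio_dif tests configure the target entirely over its JSON-RPC socket once the app above is up. The rpc_cmd calls traced below correspond roughly to the following manual sequence — a sketch only, assuming rpc_cmd forwards to scripts/rpc.py as in other SPDK test helpers and that /var/tmp/spdk.sock is the default RPC socket; the flags themselves are copied from the trace:

# run from the SPDK checkout as root; mirrors the traced commands (sketch, not the harness itself)
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # the harness waits via waitforlisten
./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The null bdev is created with 16 bytes of metadata per 512-byte block and DIF type 1, which is what the --dif-insert-or-strip transport option is meant to exercise in the fio runs that follow.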
00:27:09.058 [2024-07-12 16:04:06.128421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.058 16:04:06 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:09.058 16:04:06 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:27:09.058 16:04:06 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:09.058 16:04:06 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:09.058 16:04:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:09.058 16:04:06 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.058 16:04:06 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:09.058 16:04:06 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:09.058 16:04:06 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.058 16:04:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:09.058 [2024-07-12 16:04:06.264436] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.058 16:04:06 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.058 16:04:06 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:09.058 16:04:06 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:09.058 16:04:06 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:09.058 16:04:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:09.058 ************************************ 00:27:09.058 START TEST fio_dif_1_default 00:27:09.058 ************************************ 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:09.058 bdev_null0 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:09.058 [2024-07-12 16:04:06.324748] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:09.058 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:09.059 { 00:27:09.059 "params": { 00:27:09.059 "name": "Nvme$subsystem", 00:27:09.059 "trtype": "$TEST_TRANSPORT", 00:27:09.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:09.059 "adrfam": "ipv4", 00:27:09.059 "trsvcid": "$NVMF_PORT", 00:27:09.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:09.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:09.059 "hdgst": ${hdgst:-false}, 00:27:09.059 "ddgst": ${ddgst:-false} 00:27:09.059 }, 00:27:09.059 "method": "bdev_nvme_attach_controller" 00:27:09.059 } 00:27:09.059 EOF 00:27:09.059 )") 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:09.059 16:04:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:09.059 "params": { 00:27:09.059 "name": "Nvme0", 00:27:09.059 "trtype": "tcp", 00:27:09.059 "traddr": "10.0.0.2", 00:27:09.059 "adrfam": "ipv4", 00:27:09.059 "trsvcid": "4420", 00:27:09.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:09.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:09.059 "hdgst": false, 00:27:09.059 "ddgst": false 00:27:09.059 }, 00:27:09.059 "method": "bdev_nvme_attach_controller" 00:27:09.059 }' 00:27:09.316 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:09.316 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:09.316 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:09.316 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:09.316 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:09.316 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:09.316 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:09.316 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:09.316 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:09.316 16:04:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:09.316 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:09.316 fio-3.35 00:27:09.316 Starting 1 thread 00:27:09.573 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.762 00:27:21.762 filename0: (groupid=0, jobs=1): err= 0: pid=874114: Fri Jul 12 16:04:17 2024 00:27:21.762 read: IOPS=188, BW=753KiB/s (771kB/s)(7552KiB/10032msec) 00:27:21.762 slat (nsec): min=6906, max=85708, avg=9112.13, stdev=3589.07 00:27:21.762 clat (usec): min=477, max=46759, avg=21225.57, stdev=20491.01 00:27:21.762 lat (usec): min=484, max=46793, avg=21234.68, stdev=20490.88 00:27:21.762 clat percentiles (usec): 00:27:21.762 | 1.00th=[ 510], 5.00th=[ 537], 10.00th=[ 545], 20.00th=[ 562], 00:27:21.762 | 30.00th=[ 570], 40.00th=[ 586], 50.00th=[41157], 60.00th=[41157], 00:27:21.762 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:27:21.762 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:27:21.762 | 99.99th=[46924] 00:27:21.762 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=753.60, stdev=30.22, samples=20 00:27:21.762 iops : min= 168, max= 192, 
avg=188.40, stdev= 7.56, samples=20 00:27:21.762 lat (usec) : 500=0.42%, 750=49.15% 00:27:21.762 lat (msec) : 50=50.42% 00:27:21.762 cpu : usr=89.24%, sys=10.46%, ctx=18, majf=0, minf=377 00:27:21.762 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:21.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.762 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:21.762 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:21.762 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:21.762 00:27:21.762 Run status group 0 (all jobs): 00:27:21.762 READ: bw=753KiB/s (771kB/s), 753KiB/s-753KiB/s (771kB/s-771kB/s), io=7552KiB (7733kB), run=10032-10032msec 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.762 00:27:21.762 real 0m11.181s 00:27:21.762 user 0m10.169s 00:27:21.762 sys 0m1.311s 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:21.762 ************************************ 00:27:21.762 END TEST fio_dif_1_default 00:27:21.762 ************************************ 00:27:21.762 16:04:17 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:21.762 16:04:17 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:21.762 16:04:17 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:21.762 16:04:17 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.762 16:04:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:21.762 ************************************ 00:27:21.762 START TEST fio_dif_1_multi_subsystems 00:27:21.762 ************************************ 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:21.762 16:04:17 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:21.762 bdev_null0 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:21.762 [2024-07-12 16:04:17.556370] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:21.762 bdev_null1 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:21.762 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:21.763 { 00:27:21.763 "params": { 00:27:21.763 "name": "Nvme$subsystem", 00:27:21.763 "trtype": "$TEST_TRANSPORT", 
00:27:21.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.763 "adrfam": "ipv4", 00:27:21.763 "trsvcid": "$NVMF_PORT", 00:27:21.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.763 "hdgst": ${hdgst:-false}, 00:27:21.763 "ddgst": ${ddgst:-false} 00:27:21.763 }, 00:27:21.763 "method": "bdev_nvme_attach_controller" 00:27:21.763 } 00:27:21.763 EOF 00:27:21.763 )") 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:21.763 { 00:27:21.763 "params": { 00:27:21.763 "name": "Nvme$subsystem", 00:27:21.763 "trtype": "$TEST_TRANSPORT", 00:27:21.763 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:21.763 "adrfam": "ipv4", 00:27:21.763 "trsvcid": "$NVMF_PORT", 00:27:21.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:21.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:21.763 "hdgst": ${hdgst:-false}, 00:27:21.763 "ddgst": ${ddgst:-false} 00:27:21.763 }, 00:27:21.763 "method": "bdev_nvme_attach_controller" 00:27:21.763 } 00:27:21.763 EOF 00:27:21.763 )") 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:21.763 "params": { 00:27:21.763 "name": "Nvme0", 00:27:21.763 "trtype": "tcp", 00:27:21.763 "traddr": "10.0.0.2", 00:27:21.763 "adrfam": "ipv4", 00:27:21.763 "trsvcid": "4420", 00:27:21.763 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:21.763 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:21.763 "hdgst": false, 00:27:21.763 "ddgst": false 00:27:21.763 }, 00:27:21.763 "method": "bdev_nvme_attach_controller" 00:27:21.763 },{ 00:27:21.763 "params": { 00:27:21.763 "name": "Nvme1", 00:27:21.763 "trtype": "tcp", 00:27:21.763 "traddr": "10.0.0.2", 00:27:21.763 "adrfam": "ipv4", 00:27:21.763 "trsvcid": "4420", 00:27:21.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:21.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:21.763 "hdgst": false, 00:27:21.763 "ddgst": false 00:27:21.763 }, 00:27:21.763 "method": "bdev_nvme_attach_controller" 00:27:21.763 }' 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:21.763 16:04:17 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:21.763 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:21.763 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:21.763 fio-3.35 00:27:21.763 Starting 2 threads 00:27:21.763 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.794 00:27:31.794 filename0: (groupid=0, jobs=1): err= 0: pid=875516: Fri Jul 12 16:04:28 2024 00:27:31.794 read: IOPS=188, BW=755KiB/s (773kB/s)(7552KiB/10004msec) 00:27:31.794 slat (nsec): min=4397, max=45232, avg=9513.53, stdev=3904.43 00:27:31.794 clat (usec): min=522, max=45116, avg=21164.25, stdev=20441.91 00:27:31.794 lat (usec): min=529, max=45131, avg=21173.76, stdev=20441.42 00:27:31.794 clat percentiles (usec): 00:27:31.794 | 1.00th=[ 537], 5.00th=[ 553], 10.00th=[ 570], 20.00th=[ 611], 00:27:31.794 | 30.00th=[ 652], 40.00th=[ 685], 50.00th=[40633], 60.00th=[41157], 00:27:31.794 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:27:31.794 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:27:31.794 | 99.99th=[45351] 00:27:31.794 
bw ( KiB/s): min= 672, max= 768, per=66.23%, avg=756.21, stdev=28.64, samples=19 00:27:31.794 iops : min= 168, max= 192, avg=189.05, stdev= 7.16, samples=19 00:27:31.794 lat (usec) : 750=43.49%, 1000=6.30% 00:27:31.794 lat (msec) : 50=50.21% 00:27:31.794 cpu : usr=93.88%, sys=5.84%, ctx=13, majf=0, minf=165 00:27:31.794 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:31.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.794 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.794 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:31.794 filename1: (groupid=0, jobs=1): err= 0: pid=875517: Fri Jul 12 16:04:28 2024 00:27:31.794 read: IOPS=97, BW=389KiB/s (398kB/s)(3904KiB/10036msec) 00:27:31.794 slat (nsec): min=4210, max=31681, avg=10331.84, stdev=4639.05 00:27:31.794 clat (usec): min=40695, max=44097, avg=41096.23, stdev=382.74 00:27:31.794 lat (usec): min=40702, max=44109, avg=41106.56, stdev=383.26 00:27:31.794 clat percentiles (usec): 00:27:31.794 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:27:31.794 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:31.794 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:27:31.794 | 99.00th=[42206], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:27:31.794 | 99.99th=[44303] 00:27:31.794 bw ( KiB/s): min= 384, max= 416, per=33.99%, avg=388.80, stdev=11.72, samples=20 00:27:31.794 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:27:31.794 lat (msec) : 50=100.00% 00:27:31.794 cpu : usr=94.60%, sys=5.12%, ctx=16, majf=0, minf=144 00:27:31.794 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:31.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.794 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.794 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:31.794 00:27:31.794 Run status group 0 (all jobs): 00:27:31.794 READ: bw=1141KiB/s (1169kB/s), 389KiB/s-755KiB/s (398kB/s-773kB/s), io=11.2MiB (11.7MB), run=10004-10036msec 00:27:31.794 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:31.794 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:31.794 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:31.794 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:31.794 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:31.794 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.795 
16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.795 00:27:31.795 real 0m11.326s 00:27:31.795 user 0m20.101s 00:27:31.795 sys 0m1.409s 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:31.795 16:04:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:31.795 ************************************ 00:27:31.795 END TEST fio_dif_1_multi_subsystems 00:27:31.795 ************************************ 00:27:31.795 16:04:28 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:31.795 16:04:28 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:31.795 16:04:28 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:31.795 16:04:28 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.795 16:04:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:31.795 ************************************ 00:27:31.795 START TEST fio_dif_rand_params 00:27:31.795 ************************************ 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 
0 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.795 bdev_null0 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:31.795 [2024-07-12 16:04:28.936871] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:31.795 { 00:27:31.795 "params": { 00:27:31.795 "name": "Nvme$subsystem", 00:27:31.795 "trtype": "$TEST_TRANSPORT", 00:27:31.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.795 "adrfam": "ipv4", 00:27:31.795 "trsvcid": "$NVMF_PORT", 00:27:31.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.795 "hdgst": ${hdgst:-false}, 00:27:31.795 "ddgst": ${ddgst:-false} 00:27:31.795 }, 00:27:31.795 "method": "bdev_nvme_attach_controller" 00:27:31.795 } 00:27:31.795 EOF 00:27:31.795 )") 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
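The ldd/grep/awk lines in this stretch are the harness probing the fio plugin for sanitizer runtimes before launching fio. Condensed into a standalone sketch (the plugin path and fio location are copied from the log; in this run both probes come back empty, so only the plugin itself ends up in LD_PRELOAD):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
# If the plugin links against an ASan runtime, that runtime has to be
# preloaded ahead of the plugin itself, otherwise the run aborts at startup.
for sanitizer in libasan libclang_rt.asan; do
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
# Hand fio the JSON bdev config and the job file through /dev/fd, as the
# fio_bdev wrapper does here; gen_nvmf_target_json and gen_fio_conf are the
# helpers from the sourced test scripts (nvmf/common.sh and target/dif.sh).
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=<(gen_nvmf_target_json 0) <(gen_fio_conf)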
00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:31.795 "params": { 00:27:31.795 "name": "Nvme0", 00:27:31.795 "trtype": "tcp", 00:27:31.795 "traddr": "10.0.0.2", 00:27:31.795 "adrfam": "ipv4", 00:27:31.795 "trsvcid": "4420", 00:27:31.795 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.795 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:31.795 "hdgst": false, 00:27:31.795 "ddgst": false 00:27:31.795 }, 00:27:31.795 "method": "bdev_nvme_attach_controller" 00:27:31.795 }' 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:31.795 16:04:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:32.054 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:32.054 ... 
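The banner above is fio echoing back the job that gen_fio_conf generated. A sketch of what that job file plausibly looks like for this step, built from the parameters set earlier in the log (bs=128k, numjobs=3, iodepth=3, runtime=5, one file); the direct= line and the filename= value are assumptions, Nvme0n1 being the usual name of the first attached namespace.

# Hypothetical gen_fio_conf output for the 128k / 3-thread step; only the
# values echoed in the fio banner and the earlier parameter block are taken
# as given, the rest is illustrative.
gen_fio_conf_sketch() {
    cat <<EOF
[global]
thread=1
ioengine=spdk_bdev
direct=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF
}
gen_fio_conf_sketch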
00:27:32.054 fio-3.35 00:27:32.054 Starting 3 threads 00:27:32.054 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.612 00:27:38.612 filename0: (groupid=0, jobs=1): err= 0: pid=876941: Fri Jul 12 16:04:34 2024 00:27:38.612 read: IOPS=228, BW=28.5MiB/s (29.9MB/s)(143MiB/5006msec) 00:27:38.612 slat (nsec): min=6197, max=52977, avg=19574.61, stdev=7044.73 00:27:38.612 clat (usec): min=7107, max=51931, avg=13124.84, stdev=2825.35 00:27:38.612 lat (usec): min=7123, max=51950, avg=13144.41, stdev=2825.32 00:27:38.612 clat percentiles (usec): 00:27:38.612 | 1.00th=[ 8094], 5.00th=[10290], 10.00th=[10683], 20.00th=[11338], 00:27:38.612 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12780], 60.00th=[13435], 00:27:38.612 | 70.00th=[14091], 80.00th=[14877], 90.00th=[16057], 95.00th=[16581], 00:27:38.612 | 99.00th=[17695], 99.50th=[18482], 99.90th=[51119], 99.95th=[52167], 00:27:38.612 | 99.99th=[52167] 00:27:38.612 bw ( KiB/s): min=26624, max=31232, per=30.98%, avg=29164.10, stdev=1712.08, samples=10 00:27:38.612 iops : min= 208, max= 244, avg=227.80, stdev=13.38, samples=10 00:27:38.612 lat (msec) : 10=4.12%, 20=95.62%, 100=0.26% 00:27:38.612 cpu : usr=93.35%, sys=6.13%, ctx=11, majf=0, minf=115 00:27:38.612 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:38.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.612 issued rwts: total=1142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.612 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:38.612 filename0: (groupid=0, jobs=1): err= 0: pid=876942: Fri Jul 12 16:04:34 2024 00:27:38.612 read: IOPS=283, BW=35.5MiB/s (37.2MB/s)(179MiB/5048msec) 00:27:38.612 slat (nsec): min=5579, max=68608, avg=20107.89, stdev=8398.10 00:27:38.612 clat (usec): min=6181, max=52651, avg=10512.64, stdev=1860.38 00:27:38.612 lat (usec): min=6195, max=52666, avg=10532.74, stdev=1860.31 00:27:38.612 clat percentiles (usec): 00:27:38.612 | 1.00th=[ 7439], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9634], 00:27:38.612 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:27:38.612 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:27:38.612 | 99.00th=[12780], 99.50th=[13042], 99.90th=[48497], 99.95th=[52691], 00:27:38.612 | 99.99th=[52691] 00:27:38.612 bw ( KiB/s): min=34816, max=38656, per=38.89%, avg=36608.00, stdev=1224.76, samples=10 00:27:38.612 iops : min= 272, max= 302, avg=286.00, stdev= 9.57, samples=10 00:27:38.612 lat (msec) : 10=29.31%, 20=70.55%, 50=0.07%, 100=0.07% 00:27:38.612 cpu : usr=83.20%, sys=9.95%, ctx=352, majf=0, minf=147 00:27:38.612 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:38.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.612 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.612 issued rwts: total=1433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.612 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:38.612 filename0: (groupid=0, jobs=1): err= 0: pid=876943: Fri Jul 12 16:04:34 2024 00:27:38.612 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(142MiB/5005msec) 00:27:38.612 slat (nsec): min=6398, max=46289, avg=18003.88, stdev=5436.11 00:27:38.612 clat (usec): min=5906, max=53490, avg=13183.32, stdev=3914.43 00:27:38.612 lat (usec): min=5923, max=53514, avg=13201.32, stdev=3914.46 00:27:38.612 clat percentiles (usec): 00:27:38.612 | 
1.00th=[ 8094], 5.00th=[10421], 10.00th=[10945], 20.00th=[11469], 00:27:38.612 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12649], 60.00th=[13042], 00:27:38.612 | 70.00th=[13698], 80.00th=[14484], 90.00th=[15401], 95.00th=[16057], 00:27:38.612 | 99.00th=[21890], 99.50th=[51643], 99.90th=[53216], 99.95th=[53740], 00:27:38.612 | 99.99th=[53740] 00:27:38.612 bw ( KiB/s): min=25344, max=32000, per=30.84%, avg=29030.40, stdev=1957.84, samples=10 00:27:38.612 iops : min= 198, max= 250, avg=226.80, stdev=15.30, samples=10 00:27:38.612 lat (msec) : 10=2.02%, 20=96.92%, 50=0.26%, 100=0.79% 00:27:38.613 cpu : usr=94.20%, sys=5.24%, ctx=12, majf=0, minf=105 00:27:38.613 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:38.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.613 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.613 issued rwts: total=1137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.613 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:38.613 00:27:38.613 Run status group 0 (all jobs): 00:27:38.613 READ: bw=91.9MiB/s (96.4MB/s), 28.4MiB/s-35.5MiB/s (29.8MB/s-37.2MB/s), io=464MiB (487MB), run=5005-5048msec 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:38.613 16:04:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.613 bdev_null0 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.613 [2024-07-12 16:04:35.217407] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.613 bdev_null1 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
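Each create_subsystem call in this stretch of the log issues the same four RPCs, only with a different index. Written out as direct scripts/rpc.py invocations (the script path is an assumption; every argument is copied from the rpc_cmd lines above and below), the three-subsystem setup is roughly:

# Equivalent setup issued directly against the target's RPC socket; rpc_cmd in
# the log is a wrapper around this script. Each null bdev is 64 MiB with a
# 512-byte block, 16 bytes of metadata and DIF type 2, which is what this
# DIF test exercises.
rpc=./scripts/rpc.py            # run from the SPDK repository root (assumption)
for id in 0 1 2; do
    $rpc bdev_null_create "bdev_null$id" 64 512 --md-size 16 --dif-type 2
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$id" \
        --serial-number "53313233-$id" --allow-any-host
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$id" "bdev_null$id"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$id" \
        -t tcp -a 10.0.0.2 -s 4420
done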
00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.613 bdev_null2 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:27:38.613 { 00:27:38.613 "params": { 00:27:38.613 "name": "Nvme$subsystem", 00:27:38.613 "trtype": "$TEST_TRANSPORT", 00:27:38.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.613 "adrfam": "ipv4", 00:27:38.613 "trsvcid": "$NVMF_PORT", 00:27:38.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.613 "hdgst": ${hdgst:-false}, 00:27:38.613 "ddgst": ${ddgst:-false} 00:27:38.613 }, 00:27:38.613 "method": "bdev_nvme_attach_controller" 00:27:38.613 } 00:27:38.613 EOF 00:27:38.613 )") 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.613 16:04:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.613 { 00:27:38.613 "params": { 00:27:38.613 "name": "Nvme$subsystem", 00:27:38.613 "trtype": "$TEST_TRANSPORT", 00:27:38.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.613 "adrfam": "ipv4", 00:27:38.614 "trsvcid": "$NVMF_PORT", 00:27:38.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.614 "hdgst": ${hdgst:-false}, 00:27:38.614 "ddgst": ${ddgst:-false} 00:27:38.614 }, 00:27:38.614 "method": "bdev_nvme_attach_controller" 00:27:38.614 } 00:27:38.614 EOF 00:27:38.614 )") 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:38.614 { 00:27:38.614 "params": { 00:27:38.614 "name": "Nvme$subsystem", 00:27:38.614 "trtype": "$TEST_TRANSPORT", 00:27:38.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:38.614 "adrfam": "ipv4", 00:27:38.614 "trsvcid": "$NVMF_PORT", 00:27:38.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:38.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:38.614 "hdgst": ${hdgst:-false}, 00:27:38.614 "ddgst": ${ddgst:-false} 00:27:38.614 }, 00:27:38.614 "method": "bdev_nvme_attach_controller" 00:27:38.614 } 00:27:38.614 EOF 00:27:38.614 )") 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:38.614 "params": { 00:27:38.614 "name": "Nvme0", 00:27:38.614 "trtype": "tcp", 00:27:38.614 "traddr": "10.0.0.2", 00:27:38.614 "adrfam": "ipv4", 00:27:38.614 "trsvcid": "4420", 00:27:38.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:38.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:38.614 "hdgst": false, 00:27:38.614 "ddgst": false 00:27:38.614 }, 00:27:38.614 "method": "bdev_nvme_attach_controller" 00:27:38.614 },{ 00:27:38.614 "params": { 00:27:38.614 "name": "Nvme1", 00:27:38.614 "trtype": "tcp", 00:27:38.614 "traddr": "10.0.0.2", 00:27:38.614 "adrfam": "ipv4", 00:27:38.614 "trsvcid": "4420", 00:27:38.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:38.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:38.614 "hdgst": false, 00:27:38.614 "ddgst": false 00:27:38.614 }, 00:27:38.614 "method": "bdev_nvme_attach_controller" 00:27:38.614 },{ 00:27:38.614 "params": { 00:27:38.614 "name": "Nvme2", 00:27:38.614 "trtype": "tcp", 00:27:38.614 "traddr": "10.0.0.2", 00:27:38.614 "adrfam": "ipv4", 00:27:38.614 "trsvcid": "4420", 00:27:38.614 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:38.614 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:38.614 "hdgst": false, 00:27:38.614 "ddgst": false 00:27:38.614 }, 00:27:38.614 "method": "bdev_nvme_attach_controller" 00:27:38.614 }' 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:38.614 16:04:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:38.614 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:38.614 ... 00:27:38.614 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:38.614 ... 00:27:38.614 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:38.614 ... 00:27:38.614 fio-3.35 00:27:38.614 Starting 24 threads 00:27:38.614 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.805 00:27:50.805 filename0: (groupid=0, jobs=1): err= 0: pid=877785: Fri Jul 12 16:04:46 2024 00:27:50.805 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10012msec) 00:27:50.805 slat (nsec): min=4893, max=94380, avg=34935.61, stdev=11786.76 00:27:50.805 clat (usec): min=16124, max=56149, avg=34222.30, stdev=3999.12 00:27:50.805 lat (usec): min=16170, max=56161, avg=34257.24, stdev=3998.94 00:27:50.805 clat percentiles (usec): 00:27:50.805 | 1.00th=[30802], 5.00th=[31589], 10.00th=[32113], 20.00th=[32637], 00:27:50.805 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.805 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.805 | 99.00th=[46400], 99.50th=[46924], 99.90th=[56361], 99.95th=[56361], 00:27:50.805 | 99.99th=[56361] 00:27:50.805 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1845.89, stdev=161.43, samples=19 00:27:50.805 iops : min= 352, max= 480, avg=461.47, stdev=40.36, samples=19 00:27:50.806 lat (msec) : 20=0.43%, 50=99.22%, 100=0.34% 00:27:50.806 cpu : usr=97.93%, sys=1.45%, ctx=40, majf=0, minf=9 00:27:50.806 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.806 filename0: (groupid=0, jobs=1): err= 0: pid=877786: Fri Jul 12 16:04:46 2024 00:27:50.806 read: IOPS=472, BW=1890KiB/s (1935kB/s)(18.5MiB/10025msec) 00:27:50.806 slat (usec): min=4, max=165, avg=26.23, stdev=22.73 00:27:50.806 clat (usec): min=1565, max=46879, avg=33637.30, stdev=5525.64 00:27:50.806 lat (usec): min=1574, max=46910, avg=33663.53, stdev=5522.57 00:27:50.806 clat percentiles (usec): 00:27:50.806 | 1.00th=[ 3720], 5.00th=[31065], 10.00th=[31851], 20.00th=[32637], 00:27:50.806 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:27:50.806 | 70.00th=[33424], 80.00th=[33817], 90.00th=[42730], 95.00th=[43254], 00:27:50.806 | 99.00th=[44303], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:27:50.806 | 99.99th=[46924] 00:27:50.806 bw ( KiB/s): min= 1408, max= 2560, per=4.25%, avg=1888.00, stdev=226.99, samples=20 00:27:50.806 iops : min= 352, max= 640, avg=472.00, stdev=56.75, samples=20 00:27:50.806 lat (msec) : 2=0.68%, 4=0.68%, 10=0.34%, 
20=0.34%, 50=97.97% 00:27:50.806 cpu : usr=97.22%, sys=2.03%, ctx=77, majf=0, minf=9 00:27:50.806 IO depths : 1=5.7%, 2=11.9%, 4=24.7%, 8=50.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:27:50.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.806 filename0: (groupid=0, jobs=1): err= 0: pid=877787: Fri Jul 12 16:04:46 2024 00:27:50.806 read: IOPS=461, BW=1848KiB/s (1892kB/s)(18.1MiB/10009msec) 00:27:50.806 slat (usec): min=8, max=100, avg=31.08, stdev=21.38 00:27:50.806 clat (usec): min=16052, max=70173, avg=34369.28, stdev=4247.78 00:27:50.806 lat (usec): min=16095, max=70189, avg=34400.36, stdev=4241.15 00:27:50.806 clat percentiles (usec): 00:27:50.806 | 1.00th=[30540], 5.00th=[31327], 10.00th=[32113], 20.00th=[32637], 00:27:50.806 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:27:50.806 | 70.00th=[33424], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.806 | 99.00th=[46400], 99.50th=[46400], 99.90th=[69731], 99.95th=[69731], 00:27:50.806 | 99.99th=[69731] 00:27:50.806 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1843.20, stdev=168.18, samples=20 00:27:50.806 iops : min= 352, max= 512, avg=460.80, stdev=42.04, samples=20 00:27:50.806 lat (msec) : 20=0.09%, 50=99.57%, 100=0.35% 00:27:50.806 cpu : usr=97.54%, sys=1.66%, ctx=61, majf=0, minf=9 00:27:50.806 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.806 filename0: (groupid=0, jobs=1): err= 0: pid=877788: Fri Jul 12 16:04:46 2024 00:27:50.806 read: IOPS=463, BW=1854KiB/s (1899kB/s)(18.1MiB/10010msec) 00:27:50.806 slat (nsec): min=7897, max=85432, avg=34255.22, stdev=10079.45 00:27:50.806 clat (usec): min=16034, max=55874, avg=34193.41, stdev=3905.04 00:27:50.806 lat (usec): min=16053, max=55891, avg=34227.67, stdev=3904.29 00:27:50.806 clat percentiles (usec): 00:27:50.806 | 1.00th=[30802], 5.00th=[31589], 10.00th=[32113], 20.00th=[32637], 00:27:50.806 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.806 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.806 | 99.00th=[45876], 99.50th=[46400], 99.90th=[55837], 99.95th=[55837], 00:27:50.806 | 99.99th=[55837] 00:27:50.806 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1845.89, stdev=161.43, samples=19 00:27:50.806 iops : min= 352, max= 480, avg=461.47, stdev=40.36, samples=19 00:27:50.806 lat (msec) : 20=0.34%, 50=99.31%, 100=0.34% 00:27:50.806 cpu : usr=97.12%, sys=2.12%, ctx=64, majf=0, minf=9 00:27:50.806 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.806 filename0: (groupid=0, jobs=1): err= 0: pid=877789: Fri Jul 12 16:04:46 2024 00:27:50.806 
read: IOPS=464, BW=1857KiB/s (1901kB/s)(18.2MiB/10031msec) 00:27:50.806 slat (nsec): min=5110, max=95960, avg=33109.01, stdev=14203.04 00:27:50.806 clat (usec): min=20989, max=46675, avg=34181.82, stdev=3679.57 00:27:50.806 lat (usec): min=20996, max=46698, avg=34214.93, stdev=3673.99 00:27:50.806 clat percentiles (usec): 00:27:50.806 | 1.00th=[30802], 5.00th=[31327], 10.00th=[32113], 20.00th=[32637], 00:27:50.806 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.806 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.806 | 99.00th=[45351], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:27:50.806 | 99.99th=[46924] 00:27:50.806 bw ( KiB/s): min= 1408, max= 2048, per=4.18%, avg=1856.00, stdev=163.50, samples=20 00:27:50.806 iops : min= 352, max= 512, avg=464.00, stdev=40.87, samples=20 00:27:50.806 lat (msec) : 50=100.00% 00:27:50.806 cpu : usr=96.53%, sys=2.29%, ctx=176, majf=0, minf=9 00:27:50.806 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.806 filename0: (groupid=0, jobs=1): err= 0: pid=877790: Fri Jul 12 16:04:46 2024 00:27:50.806 read: IOPS=464, BW=1857KiB/s (1902kB/s)(18.2MiB/10027msec) 00:27:50.806 slat (nsec): min=6913, max=96812, avg=36693.64, stdev=10791.99 00:27:50.806 clat (usec): min=16967, max=46634, avg=34134.70, stdev=3701.68 00:27:50.806 lat (usec): min=16992, max=46661, avg=34171.40, stdev=3700.85 00:27:50.806 clat percentiles (usec): 00:27:50.806 | 1.00th=[30802], 5.00th=[31327], 10.00th=[32113], 20.00th=[32637], 00:27:50.806 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.806 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.806 | 99.00th=[44827], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:27:50.806 | 99.99th=[46400] 00:27:50.806 bw ( KiB/s): min= 1408, max= 2052, per=4.17%, avg=1852.50, stdev=163.07, samples=20 00:27:50.806 iops : min= 352, max= 513, avg=463.10, stdev=40.77, samples=20 00:27:50.806 lat (msec) : 20=0.34%, 50=99.66% 00:27:50.806 cpu : usr=97.68%, sys=1.66%, ctx=78, majf=0, minf=9 00:27:50.806 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.806 filename0: (groupid=0, jobs=1): err= 0: pid=877791: Fri Jul 12 16:04:46 2024 00:27:50.806 read: IOPS=462, BW=1848KiB/s (1892kB/s)(18.1MiB/10008msec) 00:27:50.806 slat (nsec): min=6110, max=75097, avg=32701.74, stdev=11726.29 00:27:50.806 clat (usec): min=29507, max=71249, avg=34332.20, stdev=4097.12 00:27:50.806 lat (usec): min=29523, max=71263, avg=34364.90, stdev=4097.73 00:27:50.806 clat percentiles (usec): 00:27:50.806 | 1.00th=[31065], 5.00th=[31589], 10.00th=[32375], 20.00th=[32637], 00:27:50.806 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.806 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.806 | 99.00th=[45876], 
99.50th=[46400], 99.90th=[69731], 99.95th=[69731], 00:27:50.806 | 99.99th=[70779] 00:27:50.806 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1843.20, stdev=168.18, samples=20 00:27:50.806 iops : min= 352, max= 512, avg=460.80, stdev=42.04, samples=20 00:27:50.806 lat (msec) : 50=99.65%, 100=0.35% 00:27:50.806 cpu : usr=96.68%, sys=2.17%, ctx=138, majf=0, minf=9 00:27:50.806 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.806 filename0: (groupid=0, jobs=1): err= 0: pid=877792: Fri Jul 12 16:04:46 2024 00:27:50.806 read: IOPS=463, BW=1854KiB/s (1899kB/s)(18.1MiB/10009msec) 00:27:50.806 slat (usec): min=13, max=105, avg=37.36, stdev=13.43 00:27:50.806 clat (usec): min=16099, max=53498, avg=34154.21, stdev=3925.71 00:27:50.806 lat (usec): min=16141, max=53527, avg=34191.57, stdev=3925.77 00:27:50.806 clat percentiles (usec): 00:27:50.806 | 1.00th=[30802], 5.00th=[31589], 10.00th=[32113], 20.00th=[32637], 00:27:50.806 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.806 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.806 | 99.00th=[46400], 99.50th=[46400], 99.90th=[53216], 99.95th=[53216], 00:27:50.806 | 99.99th=[53740] 00:27:50.806 bw ( KiB/s): min= 1408, max= 1923, per=4.15%, avg=1846.05, stdev=161.51, samples=19 00:27:50.806 iops : min= 352, max= 480, avg=461.47, stdev=40.36, samples=19 00:27:50.806 lat (msec) : 20=0.39%, 50=99.27%, 100=0.34% 00:27:50.806 cpu : usr=96.04%, sys=2.48%, ctx=226, majf=0, minf=9 00:27:50.806 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.806 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.806 filename1: (groupid=0, jobs=1): err= 0: pid=877793: Fri Jul 12 16:04:46 2024 00:27:50.806 read: IOPS=462, BW=1848KiB/s (1892kB/s)(18.1MiB/10008msec) 00:27:50.806 slat (usec): min=10, max=110, avg=35.99, stdev=14.13 00:27:50.806 clat (usec): min=29648, max=71308, avg=34263.97, stdev=4124.57 00:27:50.806 lat (usec): min=29702, max=71343, avg=34299.96, stdev=4123.84 00:27:50.806 clat percentiles (usec): 00:27:50.806 | 1.00th=[30802], 5.00th=[31589], 10.00th=[32113], 20.00th=[32637], 00:27:50.806 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.806 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.806 | 99.00th=[45876], 99.50th=[46400], 99.90th=[69731], 99.95th=[69731], 00:27:50.806 | 99.99th=[71828] 00:27:50.806 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1843.20, stdev=168.18, samples=20 00:27:50.806 iops : min= 352, max= 512, avg=460.80, stdev=42.04, samples=20 00:27:50.806 lat (msec) : 50=99.65%, 100=0.35% 00:27:50.807 cpu : usr=93.40%, sys=3.73%, ctx=733, majf=0, minf=9 00:27:50.807 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:27:50.807 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.807 filename1: (groupid=0, jobs=1): err= 0: pid=877794: Fri Jul 12 16:04:46 2024 00:27:50.807 read: IOPS=464, BW=1857KiB/s (1902kB/s)(18.2MiB/10028msec) 00:27:50.807 slat (usec): min=9, max=124, avg=33.43, stdev=11.52 00:27:50.807 clat (usec): min=17812, max=46639, avg=34185.42, stdev=3683.61 00:27:50.807 lat (usec): min=17827, max=46666, avg=34218.85, stdev=3682.84 00:27:50.807 clat percentiles (usec): 00:27:50.807 | 1.00th=[30802], 5.00th=[31589], 10.00th=[32113], 20.00th=[32637], 00:27:50.807 | 30.00th=[32900], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:27:50.807 | 70.00th=[33424], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.807 | 99.00th=[44827], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:27:50.807 | 99.99th=[46400] 00:27:50.807 bw ( KiB/s): min= 1408, max= 2048, per=4.17%, avg=1852.30, stdev=162.81, samples=20 00:27:50.807 iops : min= 352, max= 512, avg=463.05, stdev=40.70, samples=20 00:27:50.807 lat (msec) : 20=0.34%, 50=99.66% 00:27:50.807 cpu : usr=97.83%, sys=1.60%, ctx=38, majf=0, minf=9 00:27:50.807 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:50.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.807 filename1: (groupid=0, jobs=1): err= 0: pid=877795: Fri Jul 12 16:04:46 2024 00:27:50.807 read: IOPS=463, BW=1854KiB/s (1899kB/s)(18.1MiB/10009msec) 00:27:50.807 slat (nsec): min=9270, max=98876, avg=34816.45, stdev=9688.36 00:27:50.807 clat (usec): min=16157, max=53251, avg=34209.85, stdev=3888.61 00:27:50.807 lat (usec): min=16190, max=53268, avg=34244.67, stdev=3888.36 00:27:50.807 clat percentiles (usec): 00:27:50.807 | 1.00th=[30802], 5.00th=[31589], 10.00th=[32113], 20.00th=[32637], 00:27:50.807 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.807 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.807 | 99.00th=[46400], 99.50th=[46400], 99.90th=[53216], 99.95th=[53216], 00:27:50.807 | 99.99th=[53216] 00:27:50.807 bw ( KiB/s): min= 1408, max= 1923, per=4.15%, avg=1846.05, stdev=161.51, samples=19 00:27:50.807 iops : min= 352, max= 480, avg=461.47, stdev=40.36, samples=19 00:27:50.807 lat (msec) : 20=0.34%, 50=99.31%, 100=0.34% 00:27:50.807 cpu : usr=97.62%, sys=1.82%, ctx=59, majf=0, minf=9 00:27:50.807 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.807 filename1: (groupid=0, jobs=1): err= 0: pid=877796: Fri Jul 12 16:04:46 2024 00:27:50.807 read: IOPS=462, BW=1848KiB/s (1892kB/s)(18.1MiB/10008msec) 00:27:50.807 slat (usec): min=12, max=108, avg=44.41, stdev=15.90 00:27:50.807 clat (usec): min=29676, max=71277, avg=34231.28, stdev=4151.30 00:27:50.807 lat (usec): min=29721, max=71297, avg=34275.69, stdev=4147.35 00:27:50.807 clat percentiles (usec): 00:27:50.807 | 
1.00th=[30540], 5.00th=[31327], 10.00th=[31851], 20.00th=[32375], 00:27:50.807 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.807 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.807 | 99.00th=[45351], 99.50th=[46400], 99.90th=[69731], 99.95th=[69731], 00:27:50.807 | 99.99th=[70779] 00:27:50.807 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1843.20, stdev=168.18, samples=20 00:27:50.807 iops : min= 352, max= 512, avg=460.80, stdev=42.04, samples=20 00:27:50.807 lat (msec) : 50=99.65%, 100=0.35% 00:27:50.807 cpu : usr=97.39%, sys=1.73%, ctx=82, majf=0, minf=9 00:27:50.807 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.807 filename1: (groupid=0, jobs=1): err= 0: pid=877797: Fri Jul 12 16:04:46 2024 00:27:50.807 read: IOPS=463, BW=1855KiB/s (1899kB/s)(18.1MiB/10008msec) 00:27:50.807 slat (nsec): min=8468, max=96769, avg=36355.26, stdev=10757.15 00:27:50.807 clat (usec): min=15833, max=69558, avg=34172.07, stdev=3934.41 00:27:50.807 lat (usec): min=15897, max=69599, avg=34208.43, stdev=3934.44 00:27:50.807 clat percentiles (usec): 00:27:50.807 | 1.00th=[30802], 5.00th=[31589], 10.00th=[32113], 20.00th=[32637], 00:27:50.807 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.807 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.807 | 99.00th=[45876], 99.50th=[46400], 99.90th=[52167], 99.95th=[52167], 00:27:50.807 | 99.99th=[69731] 00:27:50.807 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1845.89, stdev=161.43, samples=19 00:27:50.807 iops : min= 352, max= 480, avg=461.47, stdev=40.36, samples=19 00:27:50.807 lat (msec) : 20=0.39%, 50=99.27%, 100=0.34% 00:27:50.807 cpu : usr=97.66%, sys=1.76%, ctx=48, majf=0, minf=9 00:27:50.807 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.807 filename1: (groupid=0, jobs=1): err= 0: pid=877798: Fri Jul 12 16:04:46 2024 00:27:50.807 read: IOPS=466, BW=1865KiB/s (1910kB/s)(18.2MiB/10018msec) 00:27:50.807 slat (usec): min=4, max=125, avg=19.59, stdev=12.11 00:27:50.807 clat (usec): min=9652, max=46935, avg=34134.90, stdev=4073.67 00:27:50.807 lat (usec): min=9658, max=46955, avg=34154.49, stdev=4071.61 00:27:50.807 clat percentiles (usec): 00:27:50.807 | 1.00th=[25560], 5.00th=[31327], 10.00th=[32113], 20.00th=[32637], 00:27:50.807 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33162], 00:27:50.807 | 70.00th=[33424], 80.00th=[33817], 90.00th=[42730], 95.00th=[43254], 00:27:50.807 | 99.00th=[44303], 99.50th=[45876], 99.90th=[46924], 99.95th=[46924], 00:27:50.807 | 99.99th=[46924] 00:27:50.807 bw ( KiB/s): min= 1408, max= 2052, per=4.19%, avg=1862.20, stdev=168.66, samples=20 00:27:50.807 iops : min= 352, max= 513, avg=465.55, stdev=42.17, samples=20 00:27:50.807 lat (msec) : 10=0.15%, 20=0.58%, 50=99.27% 00:27:50.807 cpu : 
usr=97.38%, sys=1.84%, ctx=75, majf=0, minf=9 00:27:50.807 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 issued rwts: total=4672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.807 filename1: (groupid=0, jobs=1): err= 0: pid=877799: Fri Jul 12 16:04:46 2024 00:27:50.807 read: IOPS=463, BW=1854KiB/s (1899kB/s)(18.1MiB/10009msec) 00:27:50.807 slat (nsec): min=8857, max=78908, avg=34308.36, stdev=9512.89 00:27:50.807 clat (usec): min=16261, max=56542, avg=34194.97, stdev=3890.59 00:27:50.807 lat (usec): min=16270, max=56572, avg=34229.28, stdev=3890.64 00:27:50.807 clat percentiles (usec): 00:27:50.807 | 1.00th=[30802], 5.00th=[31589], 10.00th=[32113], 20.00th=[32637], 00:27:50.807 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.807 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.807 | 99.00th=[45876], 99.50th=[46400], 99.90th=[55313], 99.95th=[55313], 00:27:50.807 | 99.99th=[56361] 00:27:50.807 bw ( KiB/s): min= 1408, max= 1923, per=4.15%, avg=1846.05, stdev=161.51, samples=19 00:27:50.807 iops : min= 352, max= 480, avg=461.47, stdev=40.36, samples=19 00:27:50.807 lat (msec) : 20=0.34%, 50=99.31%, 100=0.34% 00:27:50.807 cpu : usr=97.25%, sys=1.86%, ctx=52, majf=0, minf=9 00:27:50.807 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.807 filename1: (groupid=0, jobs=1): err= 0: pid=877800: Fri Jul 12 16:04:46 2024 00:27:50.807 read: IOPS=463, BW=1854KiB/s (1899kB/s)(18.1MiB/10009msec) 00:27:50.807 slat (usec): min=12, max=147, avg=36.59, stdev=10.19 00:27:50.807 clat (usec): min=15631, max=69556, avg=34186.09, stdev=4235.23 00:27:50.807 lat (usec): min=15685, max=69596, avg=34222.68, stdev=4234.72 00:27:50.807 clat percentiles (usec): 00:27:50.807 | 1.00th=[26084], 5.00th=[31327], 10.00th=[32113], 20.00th=[32637], 00:27:50.807 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.807 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.807 | 99.00th=[46400], 99.50th=[52167], 99.90th=[54264], 99.95th=[55313], 00:27:50.807 | 99.99th=[69731] 00:27:50.807 bw ( KiB/s): min= 1408, max= 1936, per=4.15%, avg=1845.89, stdev=161.52, samples=19 00:27:50.807 iops : min= 352, max= 484, avg=461.47, stdev=40.38, samples=19 00:27:50.807 lat (msec) : 20=0.73%, 50=98.71%, 100=0.56% 00:27:50.807 cpu : usr=97.57%, sys=1.83%, ctx=53, majf=0, minf=9 00:27:50.807 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:27:50.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.807 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.807 filename2: (groupid=0, jobs=1): err= 0: pid=877801: Fri Jul 12 16:04:46 2024 00:27:50.807 read: IOPS=463, BW=1854KiB/s 
(1899kB/s)(18.1MiB/10010msec) 00:27:50.807 slat (usec): min=8, max=180, avg=37.33, stdev=13.47 00:27:50.807 clat (usec): min=15767, max=53889, avg=34157.84, stdev=3930.45 00:27:50.807 lat (usec): min=15827, max=53908, avg=34195.17, stdev=3930.36 00:27:50.807 clat percentiles (usec): 00:27:50.807 | 1.00th=[30802], 5.00th=[31589], 10.00th=[32113], 20.00th=[32637], 00:27:50.807 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.808 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.808 | 99.00th=[45876], 99.50th=[46400], 99.90th=[53740], 99.95th=[53740], 00:27:50.808 | 99.99th=[53740] 00:27:50.808 bw ( KiB/s): min= 1408, max= 1920, per=4.15%, avg=1845.89, stdev=161.43, samples=19 00:27:50.808 iops : min= 352, max= 480, avg=461.47, stdev=40.36, samples=19 00:27:50.808 lat (msec) : 20=0.39%, 50=99.27%, 100=0.34% 00:27:50.808 cpu : usr=95.31%, sys=2.84%, ctx=134, majf=0, minf=9 00:27:50.808 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:50.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.808 filename2: (groupid=0, jobs=1): err= 0: pid=877802: Fri Jul 12 16:04:46 2024 00:27:50.808 read: IOPS=465, BW=1862KiB/s (1907kB/s)(18.2MiB/10001msec) 00:27:50.808 slat (usec): min=4, max=136, avg=38.69, stdev=14.29 00:27:50.808 clat (usec): min=10712, max=46676, avg=33985.04, stdev=4016.60 00:27:50.808 lat (usec): min=10730, max=46717, avg=34023.73, stdev=4017.52 00:27:50.808 clat percentiles (usec): 00:27:50.808 | 1.00th=[30540], 5.00th=[31327], 10.00th=[32113], 20.00th=[32375], 00:27:50.808 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.808 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.808 | 99.00th=[44827], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:27:50.808 | 99.99th=[46924] 00:27:50.808 bw ( KiB/s): min= 1408, max= 2048, per=4.18%, avg=1859.37, stdev=172.62, samples=19 00:27:50.808 iops : min= 352, max= 512, avg=464.84, stdev=43.16, samples=19 00:27:50.808 lat (msec) : 20=0.69%, 50=99.31% 00:27:50.808 cpu : usr=94.73%, sys=3.01%, ctx=214, majf=0, minf=9 00:27:50.808 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.808 filename2: (groupid=0, jobs=1): err= 0: pid=877803: Fri Jul 12 16:04:46 2024 00:27:50.808 read: IOPS=464, BW=1859KiB/s (1904kB/s)(18.2MiB/10017msec) 00:27:50.808 slat (nsec): min=4195, max=56997, avg=11792.25, stdev=4435.22 00:27:50.808 clat (usec): min=18378, max=46908, avg=34303.72, stdev=3745.58 00:27:50.808 lat (usec): min=18391, max=46937, avg=34315.51, stdev=3746.69 00:27:50.808 clat percentiles (usec): 00:27:50.808 | 1.00th=[31065], 5.00th=[31589], 10.00th=[32375], 20.00th=[32900], 00:27:50.808 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:27:50.808 | 70.00th=[33424], 80.00th=[33817], 90.00th=[42730], 95.00th=[43254], 00:27:50.808 | 99.00th=[44303], 99.50th=[45876], 99.90th=[46924], 
99.95th=[46924], 00:27:50.808 | 99.99th=[46924] 00:27:50.808 bw ( KiB/s): min= 1408, max= 2048, per=4.18%, avg=1856.00, stdev=183.39, samples=20 00:27:50.808 iops : min= 352, max= 512, avg=464.00, stdev=45.85, samples=20 00:27:50.808 lat (msec) : 20=0.39%, 50=99.61% 00:27:50.808 cpu : usr=97.88%, sys=1.74%, ctx=20, majf=0, minf=9 00:27:50.808 IO depths : 1=6.2%, 2=12.4%, 4=24.8%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.808 filename2: (groupid=0, jobs=1): err= 0: pid=877804: Fri Jul 12 16:04:46 2024 00:27:50.808 read: IOPS=463, BW=1854KiB/s (1899kB/s)(18.1MiB/10009msec) 00:27:50.808 slat (usec): min=8, max=104, avg=35.94, stdev=11.02 00:27:50.808 clat (usec): min=15827, max=70424, avg=34185.87, stdev=3972.47 00:27:50.808 lat (usec): min=15895, max=70442, avg=34221.81, stdev=3972.43 00:27:50.808 clat percentiles (usec): 00:27:50.808 | 1.00th=[30802], 5.00th=[31589], 10.00th=[32113], 20.00th=[32637], 00:27:50.808 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.808 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.808 | 99.00th=[46400], 99.50th=[46400], 99.90th=[53216], 99.95th=[53216], 00:27:50.808 | 99.99th=[70779] 00:27:50.808 bw ( KiB/s): min= 1408, max= 1923, per=4.15%, avg=1846.05, stdev=161.51, samples=19 00:27:50.808 iops : min= 352, max= 480, avg=461.47, stdev=40.36, samples=19 00:27:50.808 lat (msec) : 20=0.43%, 50=99.22%, 100=0.34% 00:27:50.808 cpu : usr=97.31%, sys=1.78%, ctx=87, majf=0, minf=9 00:27:50.808 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.808 filename2: (groupid=0, jobs=1): err= 0: pid=877805: Fri Jul 12 16:04:46 2024 00:27:50.808 read: IOPS=463, BW=1853KiB/s (1898kB/s)(18.1MiB/10014msec) 00:27:50.808 slat (usec): min=9, max=104, avg=42.58, stdev=17.02 00:27:50.808 clat (usec): min=15678, max=57771, avg=34164.89, stdev=4082.54 00:27:50.808 lat (usec): min=15722, max=57806, avg=34207.47, stdev=4078.66 00:27:50.808 clat percentiles (usec): 00:27:50.808 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31851], 20.00th=[32375], 00:27:50.808 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.808 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.808 | 99.00th=[46400], 99.50th=[46400], 99.90th=[57934], 99.95th=[57934], 00:27:50.808 | 99.99th=[57934] 00:27:50.808 bw ( KiB/s): min= 1408, max= 2048, per=4.16%, avg=1849.60, stdev=163.37, samples=20 00:27:50.808 iops : min= 352, max= 512, avg=462.40, stdev=40.84, samples=20 00:27:50.808 lat (msec) : 20=0.47%, 50=99.18%, 100=0.34% 00:27:50.808 cpu : usr=96.83%, sys=2.07%, ctx=104, majf=0, minf=9 00:27:50.808 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:27:50.808 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.808 filename2: (groupid=0, jobs=1): err= 0: pid=877806: Fri Jul 12 16:04:46 2024 00:27:50.808 read: IOPS=462, BW=1848KiB/s (1892kB/s)(18.1MiB/10008msec) 00:27:50.808 slat (nsec): min=8392, max=82012, avg=35917.75, stdev=9656.02 00:27:50.808 clat (usec): min=29315, max=71249, avg=34304.24, stdev=4120.91 00:27:50.808 lat (usec): min=29362, max=71265, avg=34340.16, stdev=4120.22 00:27:50.808 clat percentiles (usec): 00:27:50.808 | 1.00th=[30802], 5.00th=[31589], 10.00th=[32113], 20.00th=[32637], 00:27:50.808 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.808 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.808 | 99.00th=[45876], 99.50th=[46400], 99.90th=[69731], 99.95th=[69731], 00:27:50.808 | 99.99th=[70779] 00:27:50.808 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1843.20, stdev=168.18, samples=20 00:27:50.808 iops : min= 352, max= 512, avg=460.80, stdev=42.04, samples=20 00:27:50.808 lat (msec) : 50=99.65%, 100=0.35% 00:27:50.808 cpu : usr=96.46%, sys=2.21%, ctx=192, majf=0, minf=9 00:27:50.808 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.808 filename2: (groupid=0, jobs=1): err= 0: pid=877807: Fri Jul 12 16:04:46 2024 00:27:50.808 read: IOPS=462, BW=1848KiB/s (1892kB/s)(18.1MiB/10008msec) 00:27:50.808 slat (nsec): min=11689, max=79387, avg=36171.96, stdev=9563.98 00:27:50.808 clat (usec): min=30038, max=69981, avg=34306.30, stdev=4116.86 00:27:50.808 lat (usec): min=30090, max=70000, avg=34342.47, stdev=4116.11 00:27:50.808 clat percentiles (usec): 00:27:50.808 | 1.00th=[30802], 5.00th=[31589], 10.00th=[32113], 20.00th=[32637], 00:27:50.808 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.808 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.808 | 99.00th=[45351], 99.50th=[46400], 99.90th=[69731], 99.95th=[69731], 00:27:50.808 | 99.99th=[69731] 00:27:50.808 bw ( KiB/s): min= 1408, max= 2048, per=4.15%, avg=1843.20, stdev=168.18, samples=20 00:27:50.808 iops : min= 352, max= 512, avg=460.80, stdev=42.04, samples=20 00:27:50.808 lat (msec) : 50=99.65%, 100=0.35% 00:27:50.808 cpu : usr=97.60%, sys=1.75%, ctx=42, majf=0, minf=9 00:27:50.808 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.808 filename2: (groupid=0, jobs=1): err= 0: pid=877808: Fri Jul 12 16:04:46 2024 00:27:50.808 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10013msec) 00:27:50.808 slat (nsec): min=4169, max=96467, avg=34822.46, stdev=13373.21 00:27:50.808 clat (usec): min=16158, max=57257, avg=34228.45, stdev=3962.39 00:27:50.808 lat (usec): min=16183, max=57335, avg=34263.27, stdev=3962.36 00:27:50.808 clat percentiles (usec): 00:27:50.808 | 
1.00th=[30802], 5.00th=[31589], 10.00th=[32113], 20.00th=[32637], 00:27:50.808 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:27:50.808 | 70.00th=[33162], 80.00th=[33424], 90.00th=[42730], 95.00th=[43254], 00:27:50.808 | 99.00th=[45876], 99.50th=[46400], 99.90th=[57410], 99.95th=[57410], 00:27:50.808 | 99.99th=[57410] 00:27:50.808 bw ( KiB/s): min= 1408, max= 2048, per=4.16%, avg=1849.75, stdev=163.31, samples=20 00:27:50.808 iops : min= 352, max= 512, avg=462.40, stdev=40.84, samples=20 00:27:50.808 lat (msec) : 20=0.34%, 50=99.31%, 100=0.34% 00:27:50.808 cpu : usr=97.15%, sys=1.86%, ctx=193, majf=0, minf=9 00:27:50.808 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.808 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.808 00:27:50.808 Run status group 0 (all jobs): 00:27:50.808 READ: bw=43.4MiB/s (45.5MB/s), 1848KiB/s-1890KiB/s (1892kB/s-1935kB/s), io=435MiB (457MB), run=10001-10031msec 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.809 bdev_null0 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.809 16:04:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.809 [2024-07-12 16:04:46.976686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.809 bdev_null1 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.809 16:04:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.809 { 00:27:50.809 "params": { 00:27:50.809 "name": "Nvme$subsystem", 00:27:50.809 "trtype": "$TEST_TRANSPORT", 00:27:50.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.809 "adrfam": "ipv4", 00:27:50.809 "trsvcid": "$NVMF_PORT", 00:27:50.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.809 "hdgst": ${hdgst:-false}, 00:27:50.809 "ddgst": ${ddgst:-false} 00:27:50.809 }, 00:27:50.809 "method": "bdev_nvme_attach_controller" 00:27:50.809 } 00:27:50.809 EOF 00:27:50.809 )") 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:50.809 16:04:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:50.810 { 00:27:50.810 "params": { 00:27:50.810 "name": "Nvme$subsystem", 00:27:50.810 "trtype": "$TEST_TRANSPORT", 00:27:50.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.810 "adrfam": "ipv4", 00:27:50.810 "trsvcid": "$NVMF_PORT", 00:27:50.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.810 "hdgst": ${hdgst:-false}, 00:27:50.810 "ddgst": ${ddgst:-false} 00:27:50.810 }, 00:27:50.810 "method": "bdev_nvme_attach_controller" 00:27:50.810 } 00:27:50.810 EOF 00:27:50.810 )") 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@72 -- # (( file++ )) 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:50.810 "params": { 00:27:50.810 "name": "Nvme0", 00:27:50.810 "trtype": "tcp", 00:27:50.810 "traddr": "10.0.0.2", 00:27:50.810 "adrfam": "ipv4", 00:27:50.810 "trsvcid": "4420", 00:27:50.810 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:50.810 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:50.810 "hdgst": false, 00:27:50.810 "ddgst": false 00:27:50.810 }, 00:27:50.810 "method": "bdev_nvme_attach_controller" 00:27:50.810 },{ 00:27:50.810 "params": { 00:27:50.810 "name": "Nvme1", 00:27:50.810 "trtype": "tcp", 00:27:50.810 "traddr": "10.0.0.2", 00:27:50.810 "adrfam": "ipv4", 00:27:50.810 "trsvcid": "4420", 00:27:50.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:50.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:50.810 "hdgst": false, 00:27:50.810 "ddgst": false 00:27:50.810 }, 00:27:50.810 "method": "bdev_nvme_attach_controller" 00:27:50.810 }' 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:50.810 16:04:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.810 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:50.810 ... 00:27:50.810 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:50.810 ... 
00:27:50.810 fio-3.35 00:27:50.810 Starting 4 threads 00:27:50.810 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.070 00:27:56.070 filename0: (groupid=0, jobs=1): err= 0: pid=879186: Fri Jul 12 16:04:53 2024 00:27:56.070 read: IOPS=2007, BW=15.7MiB/s (16.4MB/s)(78.4MiB/5001msec) 00:27:56.070 slat (nsec): min=5478, max=64467, avg=19318.20, stdev=8763.42 00:27:56.070 clat (usec): min=969, max=8206, avg=3920.44, stdev=387.92 00:27:56.070 lat (usec): min=988, max=8226, avg=3939.76, stdev=388.17 00:27:56.070 clat percentiles (usec): 00:27:56.070 | 1.00th=[ 2900], 5.00th=[ 3458], 10.00th=[ 3621], 20.00th=[ 3720], 00:27:56.070 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 3949], 00:27:56.070 | 70.00th=[ 4015], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4424], 00:27:56.070 | 99.00th=[ 5211], 99.50th=[ 5735], 99.90th=[ 6718], 99.95th=[ 6915], 00:27:56.070 | 99.99th=[ 7635] 00:27:56.070 bw ( KiB/s): min=15519, max=16896, per=25.12%, avg=16104.78, stdev=472.50, samples=9 00:27:56.070 iops : min= 1939, max= 2112, avg=2013.00, stdev=59.20, samples=9 00:27:56.070 lat (usec) : 1000=0.01% 00:27:56.070 lat (msec) : 2=0.22%, 4=66.24%, 10=33.53% 00:27:56.070 cpu : usr=94.74%, sys=4.76%, ctx=23, majf=0, minf=64 00:27:56.070 IO depths : 1=0.3%, 2=13.1%, 4=60.7%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:56.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.070 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.070 issued rwts: total=10041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:56.070 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:56.070 filename0: (groupid=0, jobs=1): err= 0: pid=879187: Fri Jul 12 16:04:53 2024 00:27:56.070 read: IOPS=2012, BW=15.7MiB/s (16.5MB/s)(78.6MiB/5001msec) 00:27:56.070 slat (nsec): min=5654, max=66138, avg=21224.75, stdev=9712.53 00:27:56.070 clat (usec): min=689, max=7823, avg=3889.11, stdev=466.11 00:27:56.070 lat (usec): min=703, max=7839, avg=3910.34, stdev=466.66 00:27:56.070 clat percentiles (usec): 00:27:56.070 | 1.00th=[ 2278], 5.00th=[ 3392], 10.00th=[ 3589], 20.00th=[ 3687], 00:27:56.070 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3916], 00:27:56.070 | 70.00th=[ 3982], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4424], 00:27:56.070 | 99.00th=[ 5604], 99.50th=[ 6194], 99.90th=[ 7242], 99.95th=[ 7308], 00:27:56.070 | 99.99th=[ 7635] 00:27:56.070 bw ( KiB/s): min=15328, max=17296, per=25.18%, avg=16140.44, stdev=558.38, samples=9 00:27:56.070 iops : min= 1916, max= 2162, avg=2017.56, stdev=69.80, samples=9 00:27:56.070 lat (usec) : 750=0.02%, 1000=0.15% 00:27:56.070 lat (msec) : 2=0.56%, 4=70.44%, 10=28.84% 00:27:56.070 cpu : usr=95.10%, sys=4.24%, ctx=41, majf=0, minf=37 00:27:56.070 IO depths : 1=0.9%, 2=22.3%, 4=52.1%, 8=24.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:56.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.070 complete : 0=0.0%, 4=90.4%, 8=9.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.070 issued rwts: total=10066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:56.070 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:56.070 filename1: (groupid=0, jobs=1): err= 0: pid=879188: Fri Jul 12 16:04:53 2024 00:27:56.070 read: IOPS=2002, BW=15.6MiB/s (16.4MB/s)(78.3MiB/5002msec) 00:27:56.070 slat (nsec): min=6601, max=72844, avg=19483.28, stdev=9269.68 00:27:56.070 clat (usec): min=589, max=8512, avg=3923.07, stdev=379.90 00:27:56.070 lat (usec): min=603, max=8527, avg=3942.55, stdev=380.25 
00:27:56.070 clat percentiles (usec): 00:27:56.070 | 1.00th=[ 3032], 5.00th=[ 3523], 10.00th=[ 3621], 20.00th=[ 3720], 00:27:56.070 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 3949], 00:27:56.070 | 70.00th=[ 4015], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4424], 00:27:56.070 | 99.00th=[ 5276], 99.50th=[ 5604], 99.90th=[ 6521], 99.95th=[ 6587], 00:27:56.070 | 99.99th=[ 7177] 00:27:56.070 bw ( KiB/s): min=15456, max=16560, per=25.05%, avg=16062.22, stdev=358.17, samples=9 00:27:56.070 iops : min= 1932, max= 2070, avg=2007.78, stdev=44.77, samples=9 00:27:56.070 lat (usec) : 750=0.03%, 1000=0.05% 00:27:56.070 lat (msec) : 2=0.17%, 4=67.95%, 10=31.80% 00:27:56.070 cpu : usr=96.00%, sys=3.40%, ctx=48, majf=0, minf=31 00:27:56.070 IO depths : 1=0.6%, 2=18.0%, 4=55.1%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:56.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.070 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.070 issued rwts: total=10017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:56.070 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:56.071 filename1: (groupid=0, jobs=1): err= 0: pid=879189: Fri Jul 12 16:04:53 2024 00:27:56.071 read: IOPS=1991, BW=15.6MiB/s (16.3MB/s)(77.8MiB/5001msec) 00:27:56.071 slat (nsec): min=5250, max=65989, avg=20746.81, stdev=9453.86 00:27:56.071 clat (usec): min=773, max=7622, avg=3936.13, stdev=454.71 00:27:56.071 lat (usec): min=787, max=7638, avg=3956.88, stdev=454.45 00:27:56.071 clat percentiles (usec): 00:27:56.071 | 1.00th=[ 2376], 5.00th=[ 3589], 10.00th=[ 3654], 20.00th=[ 3720], 00:27:56.071 | 30.00th=[ 3785], 40.00th=[ 3818], 50.00th=[ 3884], 60.00th=[ 3949], 00:27:56.071 | 70.00th=[ 4015], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4621], 00:27:56.071 | 99.00th=[ 5604], 99.50th=[ 6063], 99.90th=[ 7111], 99.95th=[ 7242], 00:27:56.071 | 99.99th=[ 7635] 00:27:56.071 bw ( KiB/s): min=15390, max=16384, per=24.85%, avg=15928.67, stdev=352.57, samples=9 00:27:56.071 iops : min= 1923, max= 2048, avg=1991.00, stdev=44.22, samples=9 00:27:56.071 lat (usec) : 1000=0.07% 00:27:56.071 lat (msec) : 2=0.61%, 4=67.26%, 10=32.06% 00:27:56.071 cpu : usr=96.44%, sys=3.04%, ctx=7, majf=0, minf=55 00:27:56.071 IO depths : 1=0.6%, 2=21.6%, 4=52.7%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:56.071 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.071 complete : 0=0.0%, 4=90.5%, 8=9.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:56.071 issued rwts: total=9959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:56.071 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:56.071 00:27:56.071 Run status group 0 (all jobs): 00:27:56.071 READ: bw=62.6MiB/s (65.6MB/s), 15.6MiB/s-15.7MiB/s (16.3MB/s-16.5MB/s), io=313MiB (328MB), run=5001-5002msec 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.329 
16:04:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.329 00:27:56.329 real 0m24.563s 00:27:56.329 user 4m30.454s 00:27:56.329 sys 0m7.761s 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:56.329 16:04:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:56.329 ************************************ 00:27:56.329 END TEST fio_dif_rand_params 00:27:56.329 ************************************ 00:27:56.329 16:04:53 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:27:56.329 16:04:53 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:56.329 16:04:53 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:56.329 16:04:53 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:56.329 16:04:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:56.329 ************************************ 00:27:56.329 START TEST fio_dif_digest 00:27:56.329 ************************************ 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:27:56.329 16:04:53 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:56.329 bdev_null0 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:56.329 [2024-07-12 16:04:53.551744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:56.329 16:04:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:56.329 { 00:27:56.329 "params": { 00:27:56.329 "name": "Nvme$subsystem", 00:27:56.329 "trtype": "$TEST_TRANSPORT", 00:27:56.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:56.329 "adrfam": "ipv4", 00:27:56.329 "trsvcid": "$NVMF_PORT", 00:27:56.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:56.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:56.329 "hdgst": ${hdgst:-false}, 
00:27:56.329 "ddgst": ${ddgst:-false} 00:27:56.329 }, 00:27:56.330 "method": "bdev_nvme_attach_controller" 00:27:56.330 } 00:27:56.330 EOF 00:27:56.330 )") 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:56.330 "params": { 00:27:56.330 "name": "Nvme0", 00:27:56.330 "trtype": "tcp", 00:27:56.330 "traddr": "10.0.0.2", 00:27:56.330 "adrfam": "ipv4", 00:27:56.330 "trsvcid": "4420", 00:27:56.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:56.330 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:56.330 "hdgst": true, 00:27:56.330 "ddgst": true 00:27:56.330 }, 00:27:56.330 "method": "bdev_nvme_attach_controller" 00:27:56.330 }' 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:56.330 16:04:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:56.587 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:56.587 ... 
00:27:56.587 fio-3.35 00:27:56.587 Starting 3 threads 00:27:56.587 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.795 00:28:08.795 filename0: (groupid=0, jobs=1): err= 0: pid=880064: Fri Jul 12 16:05:04 2024 00:28:08.795 read: IOPS=220, BW=27.5MiB/s (28.9MB/s)(277MiB/10046msec) 00:28:08.795 slat (nsec): min=5777, max=52740, avg=18621.28, stdev=5635.23 00:28:08.795 clat (usec): min=7080, max=53238, avg=13583.55, stdev=1579.37 00:28:08.795 lat (usec): min=7107, max=53253, avg=13602.17, stdev=1579.38 00:28:08.795 clat percentiles (usec): 00:28:08.795 | 1.00th=[10421], 5.00th=[11863], 10.00th=[12256], 20.00th=[12780], 00:28:08.795 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:28:08.795 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14877], 95.00th=[15270], 00:28:08.795 | 99.00th=[16188], 99.50th=[16909], 99.90th=[17957], 99.95th=[49546], 00:28:08.795 | 99.99th=[53216] 00:28:08.795 bw ( KiB/s): min=27136, max=30976, per=33.97%, avg=28277.90, stdev=849.29, samples=20 00:28:08.795 iops : min= 212, max= 242, avg=220.90, stdev= 6.66, samples=20 00:28:08.795 lat (msec) : 10=0.90%, 20=99.01%, 50=0.05%, 100=0.05% 00:28:08.795 cpu : usr=94.53%, sys=4.77%, ctx=118, majf=0, minf=131 00:28:08.795 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.795 issued rwts: total=2212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.795 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:08.795 filename0: (groupid=0, jobs=1): err= 0: pid=880065: Fri Jul 12 16:05:04 2024 00:28:08.795 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(265MiB/10044msec) 00:28:08.795 slat (nsec): min=8570, max=98261, avg=18539.41, stdev=4846.82 00:28:08.795 clat (usec): min=8453, max=50319, avg=14197.38, stdev=1559.48 00:28:08.795 lat (usec): min=8468, max=50340, avg=14215.92, stdev=1559.55 00:28:08.795 clat percentiles (usec): 00:28:08.795 | 1.00th=[10028], 5.00th=[12518], 10.00th=[12911], 20.00th=[13435], 00:28:08.795 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14353], 00:28:08.795 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15533], 95.00th=[15926], 00:28:08.795 | 99.00th=[16909], 99.50th=[17171], 99.90th=[19530], 99.95th=[49021], 00:28:08.795 | 99.99th=[50070] 00:28:08.795 bw ( KiB/s): min=25856, max=29952, per=32.51%, avg=27059.20, stdev=902.62, samples=20 00:28:08.795 iops : min= 202, max= 234, avg=211.40, stdev= 7.05, samples=20 00:28:08.795 lat (msec) : 10=1.04%, 20=98.87%, 50=0.05%, 100=0.05% 00:28:08.795 cpu : usr=93.09%, sys=5.60%, ctx=541, majf=0, minf=196 00:28:08.795 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.795 issued rwts: total=2116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.795 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:08.795 filename0: (groupid=0, jobs=1): err= 0: pid=880066: Fri Jul 12 16:05:04 2024 00:28:08.795 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(276MiB/10048msec) 00:28:08.795 slat (nsec): min=7542, max=89644, avg=19725.38, stdev=6113.46 00:28:08.795 clat (usec): min=9857, max=55370, avg=13619.68, stdev=2613.68 00:28:08.795 lat (usec): min=9871, max=55389, avg=13639.41, stdev=2613.90 00:28:08.795 clat percentiles (usec): 
00:28:08.795 | 1.00th=[11076], 5.00th=[11863], 10.00th=[12256], 20.00th=[12649], 00:28:08.795 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:28:08.795 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14877], 95.00th=[15139], 00:28:08.795 | 99.00th=[16057], 99.50th=[17695], 99.90th=[54789], 99.95th=[55313], 00:28:08.795 | 99.99th=[55313] 00:28:08.795 bw ( KiB/s): min=23552, max=29952, per=33.89%, avg=28211.20, stdev=1296.33, samples=20 00:28:08.795 iops : min= 184, max= 234, avg=220.40, stdev=10.13, samples=20 00:28:08.795 lat (msec) : 10=0.05%, 20=99.59%, 50=0.05%, 100=0.32% 00:28:08.795 cpu : usr=92.54%, sys=6.54%, ctx=108, majf=0, minf=277 00:28:08.795 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:08.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:08.795 issued rwts: total=2206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:08.795 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:08.795 00:28:08.795 Run status group 0 (all jobs): 00:28:08.795 READ: bw=81.3MiB/s (85.2MB/s), 26.3MiB/s-27.5MiB/s (27.6MB/s-28.9MB/s), io=817MiB (856MB), run=10044-10048msec 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.795 00:28:08.795 real 0m11.178s 00:28:08.795 user 0m29.284s 00:28:08.795 sys 0m1.950s 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:08.795 16:05:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:08.795 ************************************ 00:28:08.795 END TEST fio_dif_digest 00:28:08.795 ************************************ 00:28:08.795 16:05:04 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:08.795 16:05:04 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:08.795 16:05:04 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:08.795 16:05:04 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:08.795 16:05:04 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:08.795 16:05:04 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:08.795 16:05:04 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:28:08.795 16:05:04 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:08.795 16:05:04 nvmf_dif -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:08.795 rmmod nvme_tcp 00:28:08.795 rmmod nvme_fabrics 00:28:08.795 rmmod nvme_keyring 00:28:08.795 16:05:04 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:08.795 16:05:04 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:08.795 16:05:04 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:08.795 16:05:04 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 873889 ']' 00:28:08.795 16:05:04 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 873889 00:28:08.795 16:05:04 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 873889 ']' 00:28:08.795 16:05:04 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 873889 00:28:08.795 16:05:04 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:28:08.795 16:05:04 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:08.795 16:05:04 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 873889 00:28:08.795 16:05:04 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:08.795 16:05:04 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:08.795 16:05:04 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 873889' 00:28:08.795 killing process with pid 873889 00:28:08.795 16:05:04 nvmf_dif -- common/autotest_common.sh@967 -- # kill 873889 00:28:08.795 16:05:04 nvmf_dif -- common/autotest_common.sh@972 -- # wait 873889 00:28:08.795 16:05:05 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:08.795 16:05:05 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:09.053 Waiting for block devices as requested 00:28:09.053 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:28:09.310 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:09.310 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:09.310 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:09.567 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:09.567 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:09.567 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:09.567 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:09.824 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:09.824 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:09.824 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:09.824 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:10.082 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:10.082 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:10.082 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:10.082 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:10.340 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:10.340 16:05:07 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:10.340 16:05:07 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:10.340 16:05:07 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:10.340 16:05:07 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:10.340 16:05:07 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.340 16:05:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:10.340 16:05:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.877 16:05:09 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:12.877 00:28:12.877 real 1m7.354s 00:28:12.877 user 6m26.564s 00:28:12.877 sys 0m20.245s 00:28:12.877 16:05:09 nvmf_dif -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:28:12.877 16:05:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:12.877 ************************************ 00:28:12.877 END TEST nvmf_dif 00:28:12.877 ************************************ 00:28:12.877 16:05:09 -- common/autotest_common.sh@1142 -- # return 0 00:28:12.877 16:05:09 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:12.877 16:05:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:12.877 16:05:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:12.877 16:05:09 -- common/autotest_common.sh@10 -- # set +x 00:28:12.877 ************************************ 00:28:12.877 START TEST nvmf_abort_qd_sizes 00:28:12.877 ************************************ 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:12.877 * Looking for test storage... 00:28:12.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:12.877 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:12.878 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.878 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:12.878 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:12.878 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:12.878 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.878 16:05:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:12.878 16:05:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.878 16:05:09 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:12.878 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:12.878 16:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:28:12.878 16:05:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:14.843 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:14.843 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:14.843 Found net devices under 0000:84:00.0: cvl_0_0 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:14.843 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:14.844 Found net devices under 0000:84:00.1: cvl_0_1 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
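A minimal shell sketch of the sysfs lookup the NIC-discovery trace above performs: for every supported PCI function, any network interface bound to it appears as a directory under /sys/bus/pci/devices/<bdf>/net/. The two PCI addresses are the ones this run reported, so treat them as host-specific assumptions.

for pci in 0000:84:00.0 0000:84:00.1; do
    # each netdev bound to this PCI function is a directory under .../net/
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue
        echo "Found net devices under $pci: ${netdir##*/}"
    done
done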
00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:14.844 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.844 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:28:14.844 00:28:14.844 --- 10.0.0.2 ping statistics --- 00:28:14.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.844 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:14.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:28:14.844 00:28:14.844 --- 10.0.0.1 ping statistics --- 00:28:14.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.844 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:14.844 16:05:11 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:16.218 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:16.218 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:16.218 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:16.218 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:16.218 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:16.218 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:16.218 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:16.218 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:16.218 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:16.218 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:16.218 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:16.218 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:16.218 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:16.218 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:16.218 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:16.218 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:17.158 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=884998 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 884998 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 884998 ']' 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:17.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:17.158 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:17.158 [2024-07-12 16:05:14.369662] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:28:17.158 [2024-07-12 16:05:14.369767] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.158 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.158 [2024-07-12 16:05:14.438622] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:17.416 [2024-07-12 16:05:14.562097] app.c: 607:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.416 [2024-07-12 16:05:14.562151] app.c: 608:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.416 [2024-07-12 16:05:14.562165] app.c: 613:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.416 [2024-07-12 16:05:14.562176] app.c: 614:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.416 [2024-07-12 16:05:14.562186] app.c: 615:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:17.416 [2024-07-12 16:05:14.562269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.416 [2024-07-12 16:05:14.562359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.416 [2024-07-12 16:05:14.562426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:17.416 [2024-07-12 16:05:14.562429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.416 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:17.416 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:28:17.416 16:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:17.416 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:17.416 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:28:17.675 16:05:14 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:17.675 16:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:17.675 ************************************ 00:28:17.675 START TEST spdk_target_abort 00:28:17.675 ************************************ 00:28:17.675 16:05:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:28:17.675 16:05:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:17.675 16:05:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:28:17.675 16:05:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.675 16:05:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:20.975 spdk_targetn1 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:20.975 [2024-07-12 16:05:17.598645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:20.975 [2024-07-12 16:05:17.630943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:20.975 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.976 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:20.976 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:20.976 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:20.976 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:20.976 16:05:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:20.976 EAL: No free 2048 kB hugepages 
reported on node 1 00:28:23.511 Initializing NVMe Controllers 00:28:23.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:23.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:23.511 Initialization complete. Launching workers. 00:28:23.511 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11914, failed: 0 00:28:23.511 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1324, failed to submit 10590 00:28:23.511 success 715, unsuccess 609, failed 0 00:28:23.511 16:05:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:23.511 16:05:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:23.511 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.785 Initializing NVMe Controllers 00:28:26.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:26.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:26.785 Initialization complete. Launching workers. 00:28:26.785 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8574, failed: 0 00:28:26.785 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1228, failed to submit 7346 00:28:26.785 success 345, unsuccess 883, failed 0 00:28:26.785 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:26.785 16:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:26.785 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.064 Initializing NVMe Controllers 00:28:30.064 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:30.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:30.064 Initialization complete. Launching workers. 
00:28:30.064 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31441, failed: 0 00:28:30.064 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2854, failed to submit 28587 00:28:30.064 success 546, unsuccess 2308, failed 0 00:28:30.064 16:05:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:30.064 16:05:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.064 16:05:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:30.064 16:05:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.064 16:05:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:30.064 16:05:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.064 16:05:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:31.439 16:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.439 16:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 884998 00:28:31.439 16:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 884998 ']' 00:28:31.439 16:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 884998 00:28:31.439 16:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:28:31.439 16:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:31.439 16:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 884998 00:28:31.439 16:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:31.439 16:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:31.439 16:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 884998' 00:28:31.439 killing process with pid 884998 00:28:31.439 16:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 884998 00:28:31.439 16:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 884998 00:28:31.697 00:28:31.697 real 0m14.106s 00:28:31.697 user 0m53.389s 00:28:31.697 sys 0m2.735s 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:31.697 ************************************ 00:28:31.697 END TEST spdk_target_abort 00:28:31.697 ************************************ 00:28:31.697 16:05:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:31.697 16:05:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:31.697 16:05:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:31.697 16:05:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:31.697 16:05:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:31.697 
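Before the kernel-target variant starts below, the spdk_target_abort flow traced above condenses to the following sketch. It invokes scripts/rpc.py directly, whereas the test script goes through its rpc_cmd wrapper, and it reuses the PCI address, NQN, IP and port from this particular run, so those values are run-specific assumptions.

# Target-side setup, condensed from the rpc_cmd calls in the trace above
./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
# the abort example is then run once per queue depth, exactly as above
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done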
************************************ 00:28:31.697 START TEST kernel_target_abort 00:28:31.697 ************************************ 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:31.697 16:05:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:33.071 Waiting for block devices as requested 00:28:33.071 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:28:33.071 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:33.071 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:33.071 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:33.329 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:33.329 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:33.329 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:33.329 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:33.586 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:33.586 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:33.586 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:33.586 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:33.845 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:33.845 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:33.845 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:34.104 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:34.104 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:34.104 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:34.104 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:34.104 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:34.104 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:34.104 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:34.104 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:34.104 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:34.104 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:34.104 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:34.104 No valid GPT data, bailing 00:28:34.104 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:34.363 16:05:31 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:28:34.363 00:28:34.363 Discovery Log Number of Records 2, Generation counter 2 00:28:34.363 =====Discovery Log Entry 0====== 00:28:34.363 trtype: tcp 00:28:34.363 adrfam: ipv4 00:28:34.363 subtype: current discovery subsystem 00:28:34.363 treq: not specified, sq flow control disable supported 00:28:34.363 portid: 1 00:28:34.363 trsvcid: 4420 00:28:34.363 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:34.363 traddr: 10.0.0.1 00:28:34.363 eflags: none 00:28:34.363 sectype: none 00:28:34.363 =====Discovery Log Entry 1====== 00:28:34.363 trtype: tcp 00:28:34.363 adrfam: ipv4 00:28:34.363 subtype: nvme subsystem 00:28:34.363 treq: not specified, sq flow control disable supported 00:28:34.363 portid: 1 00:28:34.363 trsvcid: 4420 00:28:34.363 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:34.363 traddr: 10.0.0.1 00:28:34.363 eflags: none 00:28:34.363 sectype: none 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:34.363 16:05:31 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:34.363 16:05:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:34.363 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.647 Initializing NVMe Controllers 00:28:37.647 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:37.647 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:37.647 Initialization complete. Launching workers. 00:28:37.647 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55255, failed: 0 00:28:37.647 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 55255, failed to submit 0 00:28:37.647 success 0, unsuccess 55255, failed 0 00:28:37.647 16:05:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:37.647 16:05:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:37.647 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.925 Initializing NVMe Controllers 00:28:40.925 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:40.925 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:40.925 Initialization complete. Launching workers. 
00:28:40.925 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98862, failed: 0 00:28:40.925 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24938, failed to submit 73924 00:28:40.925 success 0, unsuccess 24938, failed 0 00:28:40.925 16:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:40.925 16:05:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:40.925 EAL: No free 2048 kB hugepages reported on node 1 00:28:44.203 Initializing NVMe Controllers 00:28:44.203 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:44.203 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:44.203 Initialization complete. Launching workers. 00:28:44.203 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95556, failed: 0 00:28:44.203 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23890, failed to submit 71666 00:28:44.203 success 0, unsuccess 23890, failed 0 00:28:44.203 16:05:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:44.203 16:05:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:44.203 16:05:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:28:44.203 16:05:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:44.203 16:05:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:44.203 16:05:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:44.203 16:05:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:44.203 16:05:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:44.203 16:05:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:44.203 16:05:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:45.137 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:45.137 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:45.137 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:45.137 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:45.137 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:45.137 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:45.137 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:45.137 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:45.137 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:45.137 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:45.137 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:45.137 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:45.137 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:45.137 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:28:45.137 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:45.137 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:46.085 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:28:46.085 00:28:46.085 real 0m14.347s 00:28:46.085 user 0m6.565s 00:28:46.085 sys 0m3.264s 00:28:46.085 16:05:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:46.085 16:05:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:46.085 ************************************ 00:28:46.085 END TEST kernel_target_abort 00:28:46.085 ************************************ 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:46.085 rmmod nvme_tcp 00:28:46.085 rmmod nvme_fabrics 00:28:46.085 rmmod nvme_keyring 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 884998 ']' 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 884998 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 884998 ']' 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 884998 00:28:46.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (884998) - No such process 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 884998 is not found' 00:28:46.085 Process with pid 884998 is not found 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:46.085 16:05:43 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:47.508 Waiting for block devices as requested 00:28:47.508 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:28:47.508 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:47.508 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:47.769 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:47.769 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:47.769 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:48.028 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:48.028 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:48.028 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:48.028 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:48.287 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:48.287 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:48.287 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:48.287 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 
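For reference, the clean_kernel_target calls traced at nvmf/common.sh@684-695 dismantle the kernel nvmet target through configfs before setup.sh rebinds the devices. A hedged reconstruction of that sequence, using the paths shown in the trace; the attribute that the bare 'echo 0' writes to is not visible here, so treating it as the namespace enable flag is an assumption:

nvmet=/sys/kernel/config/nvmet
subnqn=nqn.2016-06.io.spdk:testnqn
[[ -e $nvmet/subsystems/$subnqn ]] || exit 0
echo 0 > "$nvmet/subsystems/$subnqn/namespaces/1/enable"   # assumption: disables the namespace first
rm -f "$nvmet/ports/1/subsystems/$subnqn"                  # detach the subsystem from port 1
rmdir "$nvmet/subsystems/$subnqn/namespaces/1"
rmdir "$nvmet/ports/1"
rmdir "$nvmet/subsystems/$subnqn"
modprobe -r nvmet_tcp nvmet                                # unload the kernel target modules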
00:28:48.547 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:48.547 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:48.547 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:48.805 16:05:45 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:48.805 16:05:45 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:48.805 16:05:45 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:48.805 16:05:45 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:48.805 16:05:45 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.805 16:05:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:48.805 16:05:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.709 16:05:47 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:50.709 00:28:50.709 real 0m38.293s 00:28:50.709 user 1m2.169s 00:28:50.709 sys 0m9.593s 00:28:50.709 16:05:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:50.709 16:05:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:50.709 ************************************ 00:28:50.709 END TEST nvmf_abort_qd_sizes 00:28:50.709 ************************************ 00:28:50.709 16:05:47 -- common/autotest_common.sh@1142 -- # return 0 00:28:50.709 16:05:47 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:50.709 16:05:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:50.709 16:05:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:50.709 16:05:47 -- common/autotest_common.sh@10 -- # set +x 00:28:50.709 ************************************ 00:28:50.709 START TEST keyring_file 00:28:50.709 ************************************ 00:28:50.709 16:05:47 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:28:50.968 * Looking for test storage... 
00:28:50.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:50.968 16:05:48 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:50.968 16:05:48 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:50.968 16:05:48 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:50.968 16:05:48 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:50.968 16:05:48 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:50.968 16:05:48 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.968 16:05:48 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.968 16:05:48 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.968 16:05:48 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:50.968 16:05:48 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:50.968 16:05:48 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:50.968 16:05:48 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:50.968 16:05:48 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:50.968 16:05:48 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:50.968 16:05:48 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:50.969 16:05:48 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:50.969 16:05:48 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:50.969 16:05:48 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VCQ01FXvWu 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:50.969 16:05:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:50.969 16:05:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:50.969 16:05:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:50.969 16:05:48 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:50.969 16:05:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:50.969 16:05:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:50.969 16:05:48 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VCQ01FXvWu 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VCQ01FXvWu 00:28:50.969 16:05:48 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.VCQ01FXvWu 00:28:50.969 16:05:48 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ax2p5ooHGC 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:50.969 16:05:48 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:50.969 16:05:48 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:50.969 16:05:48 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:50.969 16:05:48 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:50.969 16:05:48 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:50.969 16:05:48 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ax2p5ooHGC 00:28:50.969 16:05:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ax2p5ooHGC 00:28:50.969 16:05:48 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Ax2p5ooHGC 00:28:50.969 16:05:48 keyring_file -- keyring/file.sh@30 -- # tgtpid=890781 00:28:50.969 16:05:48 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:50.969 16:05:48 keyring_file -- keyring/file.sh@32 -- # waitforlisten 890781 00:28:50.969 16:05:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 890781 ']' 00:28:50.969 16:05:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.969 16:05:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:50.969 16:05:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.969 16:05:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:50.969 16:05:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:50.969 [2024-07-12 16:05:48.172248] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
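The prep_key calls above (keyring/common.sh@15-23) each write an NVMe/TCP interchange PSK into a throwaway file and lock its permissions down before the test registers it with bdevperf. A minimal sketch of that preparation; the actual key text is produced by format_interchange_psk via an inline python helper whose output is not shown in the trace, so a placeholder string stands in for it here:

keypath=$(mktemp)                                    # e.g. /tmp/tmp.VCQ01FXvWu in this run
echo 'NVMeTLSkey-1:<derived from 00112233445566778899aabbccddeeff>' > "$keypath"   # placeholder PSK text
chmod 0600 "$keypath"                                # owner-only; a 0660 copy is rejected later in this log
# Registered once bdevperf's RPC socket is up (keyring/file.sh@49):
#   scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$keypath"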
00:28:50.969 [2024-07-12 16:05:48.172332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890781 ] 00:28:50.969 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.969 [2024-07-12 16:05:48.232294] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.225 [2024-07-12 16:05:48.341080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:51.482 16:05:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:51.482 [2024-07-12 16:05:48.561261] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.482 null0 00:28:51.482 [2024-07-12 16:05:48.593287] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:51.482 [2024-07-12 16:05:48.593768] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:51.482 [2024-07-12 16:05:48.601304] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.482 16:05:48 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:51.482 [2024-07-12 16:05:48.609332] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:51.482 request: 00:28:51.482 { 00:28:51.482 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:51.482 "secure_channel": false, 00:28:51.482 "listen_address": { 00:28:51.482 "trtype": "tcp", 00:28:51.482 "traddr": "127.0.0.1", 00:28:51.482 "trsvcid": "4420" 00:28:51.482 }, 00:28:51.482 "method": "nvmf_subsystem_add_listener", 00:28:51.482 "req_id": 1 00:28:51.482 } 00:28:51.482 Got JSON-RPC error response 00:28:51.482 response: 00:28:51.482 { 00:28:51.482 "code": -32602, 00:28:51.482 "message": "Invalid parameters" 00:28:51.482 } 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 
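The request/response pair above is a deliberate negative test: the target already listens on 127.0.0.1:4420 for nqn.2016-06.io.spdk:cnode0 (added a few lines earlier), so re-adding the identical listener must fail with -32602, and the NOT wrapper turns that expected failure into a pass. Roughly, against the default spdk.sock:

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# A second add of an existing listener should return "Invalid parameters" (-32602).
if "$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
       nqn.2016-06.io.spdk:cnode0; then
    echo "duplicate listener was accepted unexpectedly" >&2
    exit 1
fi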
00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:51.482 16:05:48 keyring_file -- keyring/file.sh@46 -- # bperfpid=890790 00:28:51.482 16:05:48 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:51.482 16:05:48 keyring_file -- keyring/file.sh@48 -- # waitforlisten 890790 /var/tmp/bperf.sock 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 890790 ']' 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:51.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:51.482 16:05:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:51.482 [2024-07-12 16:05:48.654696] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 00:28:51.482 [2024-07-12 16:05:48.654807] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890790 ] 00:28:51.482 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.482 [2024-07-12 16:05:48.710912] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.739 [2024-07-12 16:05:48.817975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.739 16:05:48 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:51.739 16:05:48 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:51.739 16:05:48 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VCQ01FXvWu 00:28:51.739 16:05:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VCQ01FXvWu 00:28:51.995 16:05:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Ax2p5ooHGC 00:28:51.995 16:05:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Ax2p5ooHGC 00:28:52.252 16:05:49 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:52.252 16:05:49 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:52.252 16:05:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:52.252 16:05:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:52.252 16:05:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:52.509 16:05:49 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.VCQ01FXvWu == \/\t\m\p\/\t\m\p\.\V\C\Q\0\1\F\X\v\W\u ]] 00:28:52.509 16:05:49 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:28:52.509 16:05:49 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:52.509 16:05:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:52.509 16:05:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:52.509 16:05:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:52.767 16:05:49 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Ax2p5ooHGC == \/\t\m\p\/\t\m\p\.\A\x\2\p\5\o\o\H\G\C ]] 00:28:52.767 16:05:49 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:52.767 16:05:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:52.767 16:05:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:52.767 16:05:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:52.767 16:05:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:52.767 16:05:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:53.024 16:05:50 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:53.024 16:05:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:53.024 16:05:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:53.024 16:05:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:53.024 16:05:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:53.024 16:05:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.024 16:05:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:53.282 16:05:50 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:53.282 16:05:50 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:53.282 16:05:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:53.541 [2024-07-12 16:05:50.649383] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:53.541 nvme0n1 00:28:53.541 16:05:50 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:53.541 16:05:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:53.541 16:05:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:53.541 16:05:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:53.541 16:05:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:53.541 16:05:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.799 16:05:50 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:53.799 16:05:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:53.799 16:05:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:53.799 16:05:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:53.799 16:05:50 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:28:53.799 16:05:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.799 16:05:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:54.057 16:05:51 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:28:54.057 16:05:51 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:54.057 Running I/O for 1 seconds... 00:28:55.432 00:28:55.432 Latency(us) 00:28:55.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.432 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:55.432 nvme0n1 : 1.01 10044.43 39.24 0.00 0.00 12694.81 6747.78 23884.23 00:28:55.432 =================================================================================================================== 00:28:55.432 Total : 10044.43 39.24 0.00 0.00 12694.81 6747.78 23884.23 00:28:55.432 0 00:28:55.432 16:05:52 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:55.432 16:05:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:55.432 16:05:52 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:28:55.432 16:05:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:55.432 16:05:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:55.432 16:05:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:55.432 16:05:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:55.432 16:05:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:55.702 16:05:52 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:28:55.702 16:05:52 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:28:55.702 16:05:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:55.702 16:05:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:55.702 16:05:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:55.702 16:05:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:55.702 16:05:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:55.964 16:05:53 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:55.964 16:05:53 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:55.964 16:05:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:55.964 16:05:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:55.964 16:05:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:55.964 16:05:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:55.964 16:05:53 keyring_file -- common/autotest_common.sh@640 -- # type 
-t bperf_cmd 00:28:55.964 16:05:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:55.964 16:05:53 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:55.964 16:05:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:56.222 [2024-07-12 16:05:53.341575] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:56.222 [2024-07-12 16:05:53.342310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d58a0 (107): Transport endpoint is not connected 00:28:56.222 [2024-07-12 16:05:53.343303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d58a0 (9): Bad file descriptor 00:28:56.222 [2024-07-12 16:05:53.344301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:56.222 [2024-07-12 16:05:53.344320] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:56.222 [2024-07-12 16:05:53.344354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:56.222 request: 00:28:56.222 { 00:28:56.222 "name": "nvme0", 00:28:56.222 "trtype": "tcp", 00:28:56.222 "traddr": "127.0.0.1", 00:28:56.222 "adrfam": "ipv4", 00:28:56.222 "trsvcid": "4420", 00:28:56.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:56.222 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:56.222 "prchk_reftag": false, 00:28:56.222 "prchk_guard": false, 00:28:56.222 "hdgst": false, 00:28:56.222 "ddgst": false, 00:28:56.222 "psk": "key1", 00:28:56.222 "method": "bdev_nvme_attach_controller", 00:28:56.222 "req_id": 1 00:28:56.222 } 00:28:56.222 Got JSON-RPC error response 00:28:56.222 response: 00:28:56.222 { 00:28:56.222 "code": -5, 00:28:56.222 "message": "Input/output error" 00:28:56.222 } 00:28:56.222 16:05:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:56.222 16:05:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:56.222 16:05:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:56.222 16:05:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:56.222 16:05:53 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:28:56.222 16:05:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:56.222 16:05:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:56.222 16:05:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:56.222 16:05:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:56.222 16:05:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:56.479 16:05:53 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:28:56.479 16:05:53 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:28:56.479 16:05:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:56.479 16:05:53 keyring_file -- keyring/common.sh@12 -- # jq 
-r .refcnt 00:28:56.479 16:05:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:56.479 16:05:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:56.479 16:05:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:56.737 16:05:53 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:56.737 16:05:53 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:28:56.737 16:05:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:56.994 16:05:54 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:28:56.994 16:05:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:57.252 16:05:54 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:28:57.252 16:05:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:57.252 16:05:54 keyring_file -- keyring/file.sh@77 -- # jq length 00:28:57.509 16:05:54 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:28:57.509 16:05:54 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.VCQ01FXvWu 00:28:57.509 16:05:54 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.VCQ01FXvWu 00:28:57.509 16:05:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:57.509 16:05:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.VCQ01FXvWu 00:28:57.509 16:05:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:57.509 16:05:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:57.509 16:05:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:57.509 16:05:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:57.509 16:05:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VCQ01FXvWu 00:28:57.509 16:05:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VCQ01FXvWu 00:28:57.766 [2024-07-12 16:05:54.815962] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.VCQ01FXvWu': 0100660 00:28:57.766 [2024-07-12 16:05:54.816006] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:57.766 request: 00:28:57.766 { 00:28:57.766 "name": "key0", 00:28:57.766 "path": "/tmp/tmp.VCQ01FXvWu", 00:28:57.766 "method": "keyring_file_add_key", 00:28:57.766 "req_id": 1 00:28:57.766 } 00:28:57.766 Got JSON-RPC error response 00:28:57.766 response: 00:28:57.766 { 00:28:57.766 "code": -1, 00:28:57.766 "message": "Operation not permitted" 00:28:57.766 } 00:28:57.766 16:05:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:57.766 16:05:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:57.766 16:05:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:57.766 16:05:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
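The 'Invalid permissions ... 0100660' failure above is likewise intentional: keyring/file.sh@80-81 relaxes the key file to 0660 and expects keyring_file_add_key to refuse anything that is not owner-only. A sketch of that check, assuming the key path and RPC socket used earlier in this trace:

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
keypath=/tmp/tmp.VCQ01FXvWu
chmod 0660 "$keypath"
# Expected to fail with "Operation not permitted" while the file is group-accessible.
if "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock keyring_file_add_key key0 "$keypath"; then
    echo "group-accessible key file was accepted unexpectedly" >&2
    exit 1
fi
chmod 0600 "$keypath"      # restored at keyring/file.sh@84 before the key is re-added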
00:28:57.766 16:05:54 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.VCQ01FXvWu 00:28:57.766 16:05:54 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VCQ01FXvWu 00:28:57.766 16:05:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VCQ01FXvWu 00:28:58.024 16:05:55 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.VCQ01FXvWu 00:28:58.024 16:05:55 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:28:58.024 16:05:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:58.024 16:05:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:58.024 16:05:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:58.024 16:05:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:58.024 16:05:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:58.281 16:05:55 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:28:58.281 16:05:55 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.281 16:05:55 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:58.281 16:05:55 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.281 16:05:55 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:58.281 16:05:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:58.281 16:05:55 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:58.281 16:05:55 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:58.281 16:05:55 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.281 16:05:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.281 [2024-07-12 16:05:55.561968] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.VCQ01FXvWu': No such file or directory 00:28:58.281 [2024-07-12 16:05:55.561998] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:58.281 [2024-07-12 16:05:55.562040] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:58.281 [2024-07-12 16:05:55.562051] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:58.281 [2024-07-12 16:05:55.562062] bdev_nvme.c:6273:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:58.281 request: 00:28:58.281 { 00:28:58.281 "name": "nvme0", 00:28:58.281 "trtype": "tcp", 00:28:58.281 "traddr": "127.0.0.1", 00:28:58.281 "adrfam": "ipv4", 00:28:58.281 "trsvcid": "4420", 00:28:58.281 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:28:58.281 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:58.281 "prchk_reftag": false, 00:28:58.281 "prchk_guard": false, 00:28:58.281 "hdgst": false, 00:28:58.281 "ddgst": false, 00:28:58.281 "psk": "key0", 00:28:58.281 "method": "bdev_nvme_attach_controller", 00:28:58.281 "req_id": 1 00:28:58.281 } 00:28:58.281 Got JSON-RPC error response 00:28:58.281 response: 00:28:58.281 { 00:28:58.281 "code": -19, 00:28:58.281 "message": "No such device" 00:28:58.281 } 00:28:58.538 16:05:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:58.538 16:05:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:58.538 16:05:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:58.538 16:05:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:58.538 16:05:55 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:28:58.538 16:05:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:58.795 16:05:55 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:58.795 16:05:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:58.795 16:05:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:58.795 16:05:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:58.795 16:05:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:58.795 16:05:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:58.795 16:05:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.i3BlAPTr2a 00:28:58.795 16:05:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:58.795 16:05:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:58.795 16:05:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:58.795 16:05:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:58.795 16:05:55 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:58.795 16:05:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:58.795 16:05:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:58.795 16:05:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.i3BlAPTr2a 00:28:58.795 16:05:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.i3BlAPTr2a 00:28:58.795 16:05:55 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.i3BlAPTr2a 00:28:58.795 16:05:55 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.i3BlAPTr2a 00:28:58.795 16:05:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.i3BlAPTr2a 00:28:59.052 16:05:56 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:59.052 16:05:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:59.309 nvme0n1 00:28:59.309 16:05:56 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:28:59.309 16:05:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:59.309 16:05:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:59.309 16:05:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.309 16:05:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.309 16:05:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:59.566 16:05:56 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:59.566 16:05:56 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:59.566 16:05:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:59.823 16:05:56 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:59.823 16:05:56 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:59.823 16:05:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.823 16:05:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:59.823 16:05:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:00.081 16:05:57 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:00.081 16:05:57 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:29:00.081 16:05:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:00.081 16:05:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:00.081 16:05:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:00.081 16:05:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:00.081 16:05:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:00.339 16:05:57 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:00.339 16:05:57 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:00.339 16:05:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:00.596 16:05:57 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:00.596 16:05:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:00.596 16:05:57 keyring_file -- keyring/file.sh@104 -- # jq length 00:29:00.854 16:05:57 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:00.854 16:05:57 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.i3BlAPTr2a 00:29:00.854 16:05:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.i3BlAPTr2a 00:29:01.112 16:05:58 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Ax2p5ooHGC 00:29:01.112 16:05:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Ax2p5ooHGC 00:29:01.369 16:05:58 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:01.369 16:05:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:01.627 nvme0n1 00:29:01.627 16:05:58 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:01.627 16:05:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:01.885 16:05:59 keyring_file -- keyring/file.sh@112 -- # config='{ 00:29:01.885 "subsystems": [ 00:29:01.885 { 00:29:01.885 "subsystem": "keyring", 00:29:01.885 "config": [ 00:29:01.885 { 00:29:01.885 "method": "keyring_file_add_key", 00:29:01.885 "params": { 00:29:01.885 "name": "key0", 00:29:01.885 "path": "/tmp/tmp.i3BlAPTr2a" 00:29:01.885 } 00:29:01.885 }, 00:29:01.885 { 00:29:01.885 "method": "keyring_file_add_key", 00:29:01.885 "params": { 00:29:01.885 "name": "key1", 00:29:01.885 "path": "/tmp/tmp.Ax2p5ooHGC" 00:29:01.885 } 00:29:01.885 } 00:29:01.885 ] 00:29:01.885 }, 00:29:01.885 { 00:29:01.885 "subsystem": "iobuf", 00:29:01.885 "config": [ 00:29:01.885 { 00:29:01.885 "method": "iobuf_set_options", 00:29:01.885 "params": { 00:29:01.885 "small_pool_count": 8192, 00:29:01.885 "large_pool_count": 1024, 00:29:01.885 "small_bufsize": 8192, 00:29:01.885 "large_bufsize": 135168 00:29:01.885 } 00:29:01.885 } 00:29:01.885 ] 00:29:01.885 }, 00:29:01.885 { 00:29:01.885 "subsystem": "sock", 00:29:01.885 "config": [ 00:29:01.885 { 00:29:01.885 "method": "sock_set_default_impl", 00:29:01.885 "params": { 00:29:01.885 "impl_name": "posix" 00:29:01.885 } 00:29:01.885 }, 00:29:01.885 { 00:29:01.885 "method": "sock_impl_set_options", 00:29:01.885 "params": { 00:29:01.885 "impl_name": "ssl", 00:29:01.885 "recv_buf_size": 4096, 00:29:01.885 "send_buf_size": 4096, 00:29:01.885 "enable_recv_pipe": true, 00:29:01.885 "enable_quickack": false, 00:29:01.885 "enable_placement_id": 0, 00:29:01.885 "enable_zerocopy_send_server": true, 00:29:01.885 "enable_zerocopy_send_client": false, 00:29:01.885 "zerocopy_threshold": 0, 00:29:01.885 "tls_version": 0, 00:29:01.885 "enable_ktls": false 00:29:01.885 } 00:29:01.885 }, 00:29:01.885 { 00:29:01.885 "method": "sock_impl_set_options", 00:29:01.885 "params": { 00:29:01.885 "impl_name": "posix", 00:29:01.885 "recv_buf_size": 2097152, 00:29:01.885 "send_buf_size": 2097152, 00:29:01.885 "enable_recv_pipe": true, 00:29:01.885 "enable_quickack": false, 00:29:01.885 "enable_placement_id": 0, 00:29:01.885 "enable_zerocopy_send_server": true, 00:29:01.885 "enable_zerocopy_send_client": false, 00:29:01.885 "zerocopy_threshold": 0, 00:29:01.885 "tls_version": 0, 00:29:01.885 "enable_ktls": false 00:29:01.885 } 00:29:01.885 } 00:29:01.885 ] 00:29:01.885 }, 00:29:01.885 { 00:29:01.885 "subsystem": "vmd", 00:29:01.885 "config": [] 00:29:01.886 }, 00:29:01.886 { 00:29:01.886 "subsystem": "accel", 00:29:01.886 "config": [ 00:29:01.886 { 00:29:01.886 "method": "accel_set_options", 00:29:01.886 "params": { 00:29:01.886 "small_cache_size": 128, 00:29:01.886 "large_cache_size": 16, 00:29:01.886 "task_count": 2048, 00:29:01.886 "sequence_count": 2048, 00:29:01.886 "buf_count": 2048 00:29:01.886 } 00:29:01.886 } 00:29:01.886 ] 00:29:01.886 }, 00:29:01.886 { 00:29:01.886 
"subsystem": "bdev", 00:29:01.886 "config": [ 00:29:01.886 { 00:29:01.886 "method": "bdev_set_options", 00:29:01.886 "params": { 00:29:01.886 "bdev_io_pool_size": 65535, 00:29:01.886 "bdev_io_cache_size": 256, 00:29:01.886 "bdev_auto_examine": true, 00:29:01.886 "iobuf_small_cache_size": 128, 00:29:01.886 "iobuf_large_cache_size": 16 00:29:01.886 } 00:29:01.886 }, 00:29:01.886 { 00:29:01.886 "method": "bdev_raid_set_options", 00:29:01.886 "params": { 00:29:01.886 "process_window_size_kb": 1024 00:29:01.886 } 00:29:01.886 }, 00:29:01.886 { 00:29:01.886 "method": "bdev_iscsi_set_options", 00:29:01.886 "params": { 00:29:01.886 "timeout_sec": 30 00:29:01.886 } 00:29:01.886 }, 00:29:01.886 { 00:29:01.886 "method": "bdev_nvme_set_options", 00:29:01.886 "params": { 00:29:01.886 "action_on_timeout": "none", 00:29:01.886 "timeout_us": 0, 00:29:01.886 "timeout_admin_us": 0, 00:29:01.886 "keep_alive_timeout_ms": 10000, 00:29:01.886 "arbitration_burst": 0, 00:29:01.886 "low_priority_weight": 0, 00:29:01.886 "medium_priority_weight": 0, 00:29:01.886 "high_priority_weight": 0, 00:29:01.886 "nvme_adminq_poll_period_us": 10000, 00:29:01.886 "nvme_ioq_poll_period_us": 0, 00:29:01.886 "io_queue_requests": 512, 00:29:01.886 "delay_cmd_submit": true, 00:29:01.886 "transport_retry_count": 4, 00:29:01.886 "bdev_retry_count": 3, 00:29:01.886 "transport_ack_timeout": 0, 00:29:01.886 "ctrlr_loss_timeout_sec": 0, 00:29:01.886 "reconnect_delay_sec": 0, 00:29:01.886 "fast_io_fail_timeout_sec": 0, 00:29:01.886 "disable_auto_failback": false, 00:29:01.886 "generate_uuids": false, 00:29:01.886 "transport_tos": 0, 00:29:01.886 "nvme_error_stat": false, 00:29:01.886 "rdma_srq_size": 0, 00:29:01.886 "io_path_stat": false, 00:29:01.886 "allow_accel_sequence": false, 00:29:01.886 "rdma_max_cq_size": 0, 00:29:01.886 "rdma_cm_event_timeout_ms": 0, 00:29:01.886 "dhchap_digests": [ 00:29:01.886 "sha256", 00:29:01.886 "sha384", 00:29:01.886 "sha512" 00:29:01.886 ], 00:29:01.886 "dhchap_dhgroups": [ 00:29:01.886 "null", 00:29:01.886 "ffdhe2048", 00:29:01.886 "ffdhe3072", 00:29:01.886 "ffdhe4096", 00:29:01.886 "ffdhe6144", 00:29:01.886 "ffdhe8192" 00:29:01.886 ] 00:29:01.886 } 00:29:01.886 }, 00:29:01.886 { 00:29:01.886 "method": "bdev_nvme_attach_controller", 00:29:01.886 "params": { 00:29:01.886 "name": "nvme0", 00:29:01.886 "trtype": "TCP", 00:29:01.886 "adrfam": "IPv4", 00:29:01.886 "traddr": "127.0.0.1", 00:29:01.886 "trsvcid": "4420", 00:29:01.886 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:01.886 "prchk_reftag": false, 00:29:01.886 "prchk_guard": false, 00:29:01.886 "ctrlr_loss_timeout_sec": 0, 00:29:01.886 "reconnect_delay_sec": 0, 00:29:01.886 "fast_io_fail_timeout_sec": 0, 00:29:01.886 "psk": "key0", 00:29:01.886 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:01.886 "hdgst": false, 00:29:01.886 "ddgst": false 00:29:01.886 } 00:29:01.886 }, 00:29:01.886 { 00:29:01.886 "method": "bdev_nvme_set_hotplug", 00:29:01.886 "params": { 00:29:01.886 "period_us": 100000, 00:29:01.886 "enable": false 00:29:01.886 } 00:29:01.886 }, 00:29:01.886 { 00:29:01.886 "method": "bdev_wait_for_examine" 00:29:01.886 } 00:29:01.886 ] 00:29:01.886 }, 00:29:01.886 { 00:29:01.886 "subsystem": "nbd", 00:29:01.886 "config": [] 00:29:01.886 } 00:29:01.886 ] 00:29:01.886 }' 00:29:01.886 16:05:59 keyring_file -- keyring/file.sh@114 -- # killprocess 890790 00:29:01.886 16:05:59 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 890790 ']' 00:29:01.886 16:05:59 keyring_file -- common/autotest_common.sh@952 -- # kill -0 890790 00:29:01.886 16:05:59 
keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:01.886 16:05:59 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:01.886 16:05:59 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 890790 00:29:01.886 16:05:59 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:01.886 16:05:59 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:01.886 16:05:59 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 890790' 00:29:01.886 killing process with pid 890790 00:29:01.886 16:05:59 keyring_file -- common/autotest_common.sh@967 -- # kill 890790 00:29:01.886 Received shutdown signal, test time was about 1.000000 seconds 00:29:01.886 00:29:01.886 Latency(us) 00:29:01.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.886 =================================================================================================================== 00:29:01.886 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.886 16:05:59 keyring_file -- common/autotest_common.sh@972 -- # wait 890790 00:29:02.144 16:05:59 keyring_file -- keyring/file.sh@117 -- # bperfpid=892244 00:29:02.145 16:05:59 keyring_file -- keyring/file.sh@119 -- # waitforlisten 892244 /var/tmp/bperf.sock 00:29:02.145 16:05:59 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 892244 ']' 00:29:02.145 16:05:59 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:02.145 16:05:59 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:02.145 16:05:59 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:02.145 16:05:59 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:02.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
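The second bdevperf here is launched with '-c /dev/fd/63': keyring/file.sh@112 saved the first instance's configuration via save_config, and @115-117 feed that JSON back in through a process substitution so the new instance restores both key files and the nvme0 controller on start-up. A rough sketch of the pattern, assuming $config holds the earlier save_config output:

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
config=$("$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock save_config)   # taken before the old instance is killed
# <(echo ...) is what shows up as /dev/fd/63 in the captured command line:
"$SPDK_ROOT/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config") &

The echoed JSON that follows is exactly that saved configuration.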
00:29:02.145 16:05:59 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:29:02.145 "subsystems": [ 00:29:02.145 { 00:29:02.145 "subsystem": "keyring", 00:29:02.145 "config": [ 00:29:02.145 { 00:29:02.145 "method": "keyring_file_add_key", 00:29:02.145 "params": { 00:29:02.145 "name": "key0", 00:29:02.145 "path": "/tmp/tmp.i3BlAPTr2a" 00:29:02.145 } 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "method": "keyring_file_add_key", 00:29:02.145 "params": { 00:29:02.145 "name": "key1", 00:29:02.145 "path": "/tmp/tmp.Ax2p5ooHGC" 00:29:02.145 } 00:29:02.145 } 00:29:02.145 ] 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "subsystem": "iobuf", 00:29:02.145 "config": [ 00:29:02.145 { 00:29:02.145 "method": "iobuf_set_options", 00:29:02.145 "params": { 00:29:02.145 "small_pool_count": 8192, 00:29:02.145 "large_pool_count": 1024, 00:29:02.145 "small_bufsize": 8192, 00:29:02.145 "large_bufsize": 135168 00:29:02.145 } 00:29:02.145 } 00:29:02.145 ] 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "subsystem": "sock", 00:29:02.145 "config": [ 00:29:02.145 { 00:29:02.145 "method": "sock_set_default_impl", 00:29:02.145 "params": { 00:29:02.145 "impl_name": "posix" 00:29:02.145 } 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "method": "sock_impl_set_options", 00:29:02.145 "params": { 00:29:02.145 "impl_name": "ssl", 00:29:02.145 "recv_buf_size": 4096, 00:29:02.145 "send_buf_size": 4096, 00:29:02.145 "enable_recv_pipe": true, 00:29:02.145 "enable_quickack": false, 00:29:02.145 "enable_placement_id": 0, 00:29:02.145 "enable_zerocopy_send_server": true, 00:29:02.145 "enable_zerocopy_send_client": false, 00:29:02.145 "zerocopy_threshold": 0, 00:29:02.145 "tls_version": 0, 00:29:02.145 "enable_ktls": false 00:29:02.145 } 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "method": "sock_impl_set_options", 00:29:02.145 "params": { 00:29:02.145 "impl_name": "posix", 00:29:02.145 "recv_buf_size": 2097152, 00:29:02.145 "send_buf_size": 2097152, 00:29:02.145 "enable_recv_pipe": true, 00:29:02.145 "enable_quickack": false, 00:29:02.145 "enable_placement_id": 0, 00:29:02.145 "enable_zerocopy_send_server": true, 00:29:02.145 "enable_zerocopy_send_client": false, 00:29:02.145 "zerocopy_threshold": 0, 00:29:02.145 "tls_version": 0, 00:29:02.145 "enable_ktls": false 00:29:02.145 } 00:29:02.145 } 00:29:02.145 ] 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "subsystem": "vmd", 00:29:02.145 "config": [] 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "subsystem": "accel", 00:29:02.145 "config": [ 00:29:02.145 { 00:29:02.145 "method": "accel_set_options", 00:29:02.145 "params": { 00:29:02.145 "small_cache_size": 128, 00:29:02.145 "large_cache_size": 16, 00:29:02.145 "task_count": 2048, 00:29:02.145 "sequence_count": 2048, 00:29:02.145 "buf_count": 2048 00:29:02.145 } 00:29:02.145 } 00:29:02.145 ] 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "subsystem": "bdev", 00:29:02.145 "config": [ 00:29:02.145 { 00:29:02.145 "method": "bdev_set_options", 00:29:02.145 "params": { 00:29:02.145 "bdev_io_pool_size": 65535, 00:29:02.145 "bdev_io_cache_size": 256, 00:29:02.145 "bdev_auto_examine": true, 00:29:02.145 "iobuf_small_cache_size": 128, 00:29:02.145 "iobuf_large_cache_size": 16 00:29:02.145 } 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "method": "bdev_raid_set_options", 00:29:02.145 "params": { 00:29:02.145 "process_window_size_kb": 1024 00:29:02.145 } 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "method": "bdev_iscsi_set_options", 00:29:02.145 "params": { 00:29:02.145 "timeout_sec": 30 00:29:02.145 } 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "method": 
"bdev_nvme_set_options", 00:29:02.145 "params": { 00:29:02.145 "action_on_timeout": "none", 00:29:02.145 "timeout_us": 0, 00:29:02.145 "timeout_admin_us": 0, 00:29:02.145 "keep_alive_timeout_ms": 10000, 00:29:02.145 "arbitration_burst": 0, 00:29:02.145 "low_priority_weight": 0, 00:29:02.145 "medium_priority_weight": 0, 00:29:02.145 "high_priority_weight": 0, 00:29:02.145 "nvme_adminq_poll_period_us": 10000, 00:29:02.145 "nvme_ioq_poll_period_us": 0, 00:29:02.145 "io_queue_requests": 512, 00:29:02.145 "delay_cmd_submit": true, 00:29:02.145 "transport_retry_count": 4, 00:29:02.145 "bdev_retry_count": 3, 00:29:02.145 "transport_ack_timeout": 0, 00:29:02.145 "ctrlr_loss_timeout_sec": 0, 00:29:02.145 "reconnect_delay_sec": 0, 00:29:02.145 "fast_io_fail_timeout_sec": 0, 00:29:02.145 "disable_auto_failback": false, 00:29:02.145 "generate_uuids": false, 00:29:02.145 "transport_tos": 0, 00:29:02.145 "nvme_error_stat": false, 00:29:02.145 "rdma_srq_size": 0, 00:29:02.145 "io_path_stat": false, 00:29:02.145 "allow_accel_sequence": false, 00:29:02.145 "rdma_max_cq_size": 0, 00:29:02.145 "rdma_cm_event_timeout_ms": 0, 00:29:02.145 "dhchap_digests": [ 00:29:02.145 "sha256", 00:29:02.145 "sha384", 00:29:02.145 "sha512" 00:29:02.145 ], 00:29:02.145 "dhchap_dhgroups": [ 00:29:02.145 "null", 00:29:02.145 "ffdhe2048", 00:29:02.145 "ffdhe3072", 00:29:02.145 "ffdhe4096", 00:29:02.145 "ffdhe6144", 00:29:02.145 "ffdhe8192" 00:29:02.145 ] 00:29:02.145 } 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "method": "bdev_nvme_attach_controller", 00:29:02.145 "params": { 00:29:02.145 "name": "nvme0", 00:29:02.145 "trtype": "TCP", 00:29:02.145 "adrfam": "IPv4", 00:29:02.145 "traddr": "127.0.0.1", 00:29:02.145 "trsvcid": "4420", 00:29:02.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:02.145 "prchk_reftag": false, 00:29:02.145 "prchk_guard": false, 00:29:02.145 "ctrlr_loss_timeout_sec": 0, 00:29:02.145 "reconnect_delay_sec": 0, 00:29:02.145 "fast_io_fail_timeout_sec": 0, 00:29:02.145 "psk": "key0", 00:29:02.145 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:02.145 "hdgst": false, 00:29:02.145 "ddgst": false 00:29:02.145 } 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "method": "bdev_nvme_set_hotplug", 00:29:02.145 "params": { 00:29:02.145 "period_us": 100000, 00:29:02.145 "enable": false 00:29:02.145 } 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "method": "bdev_wait_for_examine" 00:29:02.145 } 00:29:02.145 ] 00:29:02.145 }, 00:29:02.145 { 00:29:02.145 "subsystem": "nbd", 00:29:02.145 "config": [] 00:29:02.145 } 00:29:02.145 ] 00:29:02.145 }' 00:29:02.145 16:05:59 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:02.145 16:05:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:02.145 [2024-07-12 16:05:59.364363] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:29:02.145 [2024-07-12 16:05:59.364444] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892244 ] 00:29:02.145 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.145 [2024-07-12 16:05:59.422329] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.403 [2024-07-12 16:05:59.531976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.660 [2024-07-12 16:05:59.710924] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:03.225 16:06:00 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:03.225 16:06:00 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:29:03.225 16:06:00 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:03.225 16:06:00 keyring_file -- keyring/file.sh@120 -- # jq length 00:29:03.225 16:06:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.482 16:06:00 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:03.482 16:06:00 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:29:03.482 16:06:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:03.482 16:06:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:03.482 16:06:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:03.482 16:06:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.482 16:06:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:03.740 16:06:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:03.740 16:06:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:29:03.740 16:06:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:03.740 16:06:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:03.740 16:06:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:03.740 16:06:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.740 16:06:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:03.997 16:06:01 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:03.997 16:06:01 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:03.997 16:06:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:03.997 16:06:01 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:04.255 16:06:01 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:04.255 16:06:01 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:04.255 16:06:01 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.i3BlAPTr2a /tmp/tmp.Ax2p5ooHGC 00:29:04.255 16:06:01 keyring_file -- keyring/file.sh@20 -- # killprocess 892244 00:29:04.255 16:06:01 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 892244 ']' 00:29:04.255 16:06:01 keyring_file -- common/autotest_common.sh@952 -- # kill -0 892244 00:29:04.255 16:06:01 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:29:04.255 16:06:01 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:04.255 16:06:01 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 892244 00:29:04.255 16:06:01 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:04.255 16:06:01 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:04.255 16:06:01 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 892244' 00:29:04.255 killing process with pid 892244 00:29:04.255 16:06:01 keyring_file -- common/autotest_common.sh@967 -- # kill 892244 00:29:04.255 Received shutdown signal, test time was about 1.000000 seconds 00:29:04.255 00:29:04.255 Latency(us) 00:29:04.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.255 =================================================================================================================== 00:29:04.255 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:04.255 16:06:01 keyring_file -- common/autotest_common.sh@972 -- # wait 892244 00:29:04.512 16:06:01 keyring_file -- keyring/file.sh@21 -- # killprocess 890781 00:29:04.512 16:06:01 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 890781 ']' 00:29:04.512 16:06:01 keyring_file -- common/autotest_common.sh@952 -- # kill -0 890781 00:29:04.512 16:06:01 keyring_file -- common/autotest_common.sh@953 -- # uname 00:29:04.512 16:06:01 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:04.512 16:06:01 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 890781 00:29:04.512 16:06:01 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:04.512 16:06:01 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:04.512 16:06:01 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 890781' 00:29:04.512 killing process with pid 890781 00:29:04.512 16:06:01 keyring_file -- common/autotest_common.sh@967 -- # kill 890781 00:29:04.512 [2024-07-12 16:06:01.680104] app.c:1028:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:04.512 16:06:01 keyring_file -- common/autotest_common.sh@972 -- # wait 890781 00:29:05.077 00:29:05.077 real 0m14.128s 00:29:05.077 user 0m35.422s 00:29:05.077 sys 0m3.234s 00:29:05.077 16:06:02 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:05.077 16:06:02 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:05.077 ************************************ 00:29:05.077 END TEST keyring_file 00:29:05.077 ************************************ 00:29:05.077 16:06:02 -- common/autotest_common.sh@1142 -- # return 0 00:29:05.077 16:06:02 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:29:05.077 16:06:02 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:05.077 16:06:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:05.077 16:06:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:05.077 16:06:02 -- common/autotest_common.sh@10 -- # set +x 00:29:05.077 ************************************ 00:29:05.077 START TEST keyring_linux 00:29:05.077 ************************************ 00:29:05.077 16:06:02 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:29:05.077 * Looking for test storage... 00:29:05.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:05.077 16:06:02 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:05.077 16:06:02 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.077 16:06:02 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.077 16:06:02 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.077 16:06:02 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.077 16:06:02 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.077 16:06:02 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.077 16:06:02 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.077 16:06:02 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.077 16:06:02 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:05.077 16:06:02 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:05.078 16:06:02 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:05.078 16:06:02 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:05.078 16:06:02 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:05.078 16:06:02 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:05.078 16:06:02 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:05.078 16:06:02 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:05.078 16:06:02 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:05.078 /tmp/:spdk-test:key0 00:29:05.078 16:06:02 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:29:05.078 16:06:02 keyring_linux -- nvmf/common.sh@705 -- # python - 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:05.078 16:06:02 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:05.078 /tmp/:spdk-test:key1 00:29:05.078 16:06:02 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=892724 00:29:05.078 16:06:02 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:05.078 16:06:02 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 892724 00:29:05.078 16:06:02 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 892724 ']' 00:29:05.078 16:06:02 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.078 16:06:02 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:05.078 16:06:02 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.078 16:06:02 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:05.078 16:06:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:05.078 [2024-07-12 16:06:02.351078] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
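The prep_key steps above convert each raw hex key into the NVMe TLS PSK interchange form ("NVMeTLSkey-1:00:<base64 payload>:", produced by the python helper behind format_interchange_psk) and store it in a mode-0600 file. The following sketch shows what those steps amount to for key0: the interchange string is copied verbatim from the keyctl output later in this log, the printf/redirect is an assumption standing in for the script's own file write, and the encoding itself is not re-derived here.

# Sketch of "prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0", in effect:
key0_hex=00112233445566778899aabbccddeeff
key0_psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'   # interchange form of key0_hex, as printed below
printf '%s' "$key0_psk" > /tmp/:spdk-test:key0
chmod 0600 /tmp/:spdk-test:key0    # PSK material must not be world-readable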
00:29:05.078 [2024-07-12 16:06:02.351173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892724 ] 00:29:05.337 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.337 [2024-07-12 16:06:02.410157] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.337 [2024-07-12 16:06:02.522895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.595 16:06:02 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:05.595 16:06:02 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:05.595 16:06:02 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:05.595 16:06:02 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:05.595 16:06:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:05.595 [2024-07-12 16:06:02.778616] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.595 null0 00:29:05.595 [2024-07-12 16:06:02.810669] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:05.595 [2024-07-12 16:06:02.811236] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:05.595 16:06:02 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:05.595 16:06:02 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:05.595 916300587 00:29:05.595 16:06:02 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:05.595 431941783 00:29:05.595 16:06:02 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=892850 00:29:05.595 16:06:02 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:05.595 16:06:02 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 892850 /var/tmp/bperf.sock 00:29:05.595 16:06:02 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 892850 ']' 00:29:05.595 16:06:02 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:05.595 16:06:02 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:05.595 16:06:02 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:05.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:05.595 16:06:02 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:05.595 16:06:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:05.595 [2024-07-12 16:06:02.874501] Starting SPDK v24.09-pre git sha1 25161080d / DPDK 24.03.0 initialization... 
00:29:05.595 [2024-07-12 16:06:02.874584] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892850 ] 00:29:05.853 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.853 [2024-07-12 16:06:02.931902] app.c: 913:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.853 [2024-07-12 16:06:03.038376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.853 16:06:03 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:05.853 16:06:03 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:29:05.853 16:06:03 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:05.853 16:06:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:06.109 16:06:03 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:06.109 16:06:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:06.706 16:06:03 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:06.706 16:06:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:06.706 [2024-07-12 16:06:03.914116] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:06.988 nvme0n1 00:29:06.988 16:06:03 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:06.988 16:06:03 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:06.988 16:06:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:06.988 16:06:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:06.988 16:06:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:06.988 16:06:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.988 16:06:04 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:06.988 16:06:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:06.988 16:06:04 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:06.988 16:06:04 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:06.988 16:06:04 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:06.988 16:06:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.988 16:06:04 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:07.246 16:06:04 keyring_linux -- keyring/linux.sh@25 -- # sn=916300587 00:29:07.246 16:06:04 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:07.246 16:06:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:29:07.246 16:06:04 keyring_linux -- keyring/linux.sh@26 -- # [[ 916300587 == \9\1\6\3\0\0\5\8\7 ]] 00:29:07.246 16:06:04 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 916300587 00:29:07.246 16:06:04 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:07.246 16:06:04 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:07.504 Running I/O for 1 seconds... 00:29:08.437 00:29:08.437 Latency(us) 00:29:08.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.438 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:08.438 nvme0n1 : 1.01 10861.47 42.43 0.00 0.00 11707.55 5509.88 17282.09 00:29:08.438 =================================================================================================================== 00:29:08.438 Total : 10861.47 42.43 0.00 0.00 11707.55 5509.88 17282.09 00:29:08.438 0 00:29:08.438 16:06:05 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:08.438 16:06:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:08.696 16:06:05 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:08.696 16:06:05 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:08.696 16:06:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:08.696 16:06:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:08.696 16:06:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:08.696 16:06:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.954 16:06:06 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:08.954 16:06:06 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:08.954 16:06:06 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:08.954 16:06:06 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:08.954 16:06:06 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:29:08.954 16:06:06 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:08.954 16:06:06 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:29:08.954 16:06:06 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:08.954 16:06:06 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:29:08.954 16:06:06 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:08.954 16:06:06 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:08.954 16:06:06 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:09.213 [2024-07-12 16:06:06.414261] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:09.213 [2024-07-12 16:06:06.414530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1163820 (107): Transport endpoint is not connected 00:29:09.213 [2024-07-12 16:06:06.415523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1163820 (9): Bad file descriptor 00:29:09.213 [2024-07-12 16:06:06.416523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:09.213 [2024-07-12 16:06:06.416541] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:09.213 [2024-07-12 16:06:06.416569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:09.213 request: 00:29:09.213 { 00:29:09.213 "name": "nvme0", 00:29:09.213 "trtype": "tcp", 00:29:09.213 "traddr": "127.0.0.1", 00:29:09.213 "adrfam": "ipv4", 00:29:09.213 "trsvcid": "4420", 00:29:09.213 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:09.213 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:09.213 "prchk_reftag": false, 00:29:09.213 "prchk_guard": false, 00:29:09.213 "hdgst": false, 00:29:09.213 "ddgst": false, 00:29:09.213 "psk": ":spdk-test:key1", 00:29:09.213 "method": "bdev_nvme_attach_controller", 00:29:09.213 "req_id": 1 00:29:09.213 } 00:29:09.213 Got JSON-RPC error response 00:29:09.213 response: 00:29:09.213 { 00:29:09.213 "code": -5, 00:29:09.213 "message": "Input/output error" 00:29:09.213 } 00:29:09.213 16:06:06 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:29:09.213 16:06:06 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:09.213 16:06:06 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:09.213 16:06:06 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@33 -- # sn=916300587 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 916300587 00:29:09.213 1 links removed 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@33 -- # sn=431941783 00:29:09.213 
16:06:06 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 431941783 00:29:09.213 1 links removed 00:29:09.213 16:06:06 keyring_linux -- keyring/linux.sh@41 -- # killprocess 892850 00:29:09.213 16:06:06 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 892850 ']' 00:29:09.213 16:06:06 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 892850 00:29:09.213 16:06:06 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:09.213 16:06:06 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:09.213 16:06:06 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 892850 00:29:09.213 16:06:06 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:09.213 16:06:06 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:09.213 16:06:06 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 892850' 00:29:09.213 killing process with pid 892850 00:29:09.213 16:06:06 keyring_linux -- common/autotest_common.sh@967 -- # kill 892850 00:29:09.213 Received shutdown signal, test time was about 1.000000 seconds 00:29:09.213 00:29:09.213 Latency(us) 00:29:09.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.213 =================================================================================================================== 00:29:09.213 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:09.213 16:06:06 keyring_linux -- common/autotest_common.sh@972 -- # wait 892850 00:29:09.471 16:06:06 keyring_linux -- keyring/linux.sh@42 -- # killprocess 892724 00:29:09.471 16:06:06 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 892724 ']' 00:29:09.471 16:06:06 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 892724 00:29:09.471 16:06:06 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:29:09.471 16:06:06 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:09.471 16:06:06 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 892724 00:29:09.471 16:06:06 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:09.471 16:06:06 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:09.471 16:06:06 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 892724' 00:29:09.471 killing process with pid 892724 00:29:09.471 16:06:06 keyring_linux -- common/autotest_common.sh@967 -- # kill 892724 00:29:09.471 16:06:06 keyring_linux -- common/autotest_common.sh@972 -- # wait 892724 00:29:10.037 00:29:10.037 real 0m5.020s 00:29:10.037 user 0m9.766s 00:29:10.037 sys 0m1.628s 00:29:10.037 16:06:07 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:10.037 16:06:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:10.037 ************************************ 00:29:10.037 END TEST keyring_linux 00:29:10.037 ************************************ 00:29:10.037 16:06:07 -- common/autotest_common.sh@1142 -- # return 0 00:29:10.037 16:06:07 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:29:10.037 16:06:07 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:29:10.037 16:06:07 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:29:10.037 16:06:07 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:29:10.037 16:06:07 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:29:10.037 16:06:07 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:29:10.037 16:06:07 -- spdk/autotest.sh@339 -- # 
'[' 0 -eq 1 ']' 00:29:10.037 16:06:07 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:29:10.037 16:06:07 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:29:10.037 16:06:07 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:29:10.037 16:06:07 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:29:10.037 16:06:07 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:29:10.037 16:06:07 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:29:10.037 16:06:07 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:29:10.037 16:06:07 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:29:10.037 16:06:07 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:29:10.037 16:06:07 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:29:10.037 16:06:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:10.037 16:06:07 -- common/autotest_common.sh@10 -- # set +x 00:29:10.037 16:06:07 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:29:10.037 16:06:07 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:29:10.037 16:06:07 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:29:10.037 16:06:07 -- common/autotest_common.sh@10 -- # set +x 00:29:11.938 INFO: APP EXITING 00:29:11.938 INFO: killing all VMs 00:29:11.938 INFO: killing vhost app 00:29:11.938 INFO: EXIT DONE 00:29:12.873 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:29:12.873 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:29:12.873 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:12.873 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:12.873 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:12.873 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:12.873 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:29:13.132 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:13.132 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:13.132 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:13.132 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:13.132 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:13.132 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:13.132 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:13.132 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:13.132 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:13.132 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:14.511 Cleaning 00:29:14.511 Removing: /var/run/dpdk/spdk0/config 00:29:14.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:14.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:14.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:14.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:14.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:14.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:14.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:14.511 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:14.511 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:14.511 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:14.511 Removing: /var/run/dpdk/spdk1/config 00:29:14.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:14.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:14.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:14.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:14.511 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:14.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:14.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:14.511 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:14.511 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:14.511 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:14.511 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:14.511 Removing: /var/run/dpdk/spdk2/config 00:29:14.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:14.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:14.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:14.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:14.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:14.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:14.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:14.511 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:14.511 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:14.511 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:14.511 Removing: /var/run/dpdk/spdk3/config 00:29:14.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:14.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:14.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:14.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:14.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:14.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:14.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:14.511 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:14.511 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:14.511 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:14.511 Removing: /var/run/dpdk/spdk4/config 00:29:14.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:14.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:14.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:14.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:14.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:14.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:14.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:14.511 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:14.511 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:14.511 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:14.511 Removing: /dev/shm/bdev_svc_trace.1 00:29:14.511 Removing: /dev/shm/nvmf_trace.0 00:29:14.511 Removing: /dev/shm/spdk_tgt_trace.pid633593 00:29:14.511 Removing: /var/run/dpdk/spdk0 00:29:14.511 Removing: /var/run/dpdk/spdk1 00:29:14.511 Removing: /var/run/dpdk/spdk2 00:29:14.511 Removing: /var/run/dpdk/spdk3 00:29:14.511 Removing: /var/run/dpdk/spdk4 00:29:14.511 Removing: /var/run/dpdk/spdk_pid631356 00:29:14.511 Removing: /var/run/dpdk/spdk_pid632250 00:29:14.511 Removing: /var/run/dpdk/spdk_pid633593 00:29:14.511 Removing: /var/run/dpdk/spdk_pid634020 00:29:14.511 Removing: /var/run/dpdk/spdk_pid634601 00:29:14.511 Removing: /var/run/dpdk/spdk_pid634741 00:29:14.511 Removing: /var/run/dpdk/spdk_pid635474 00:29:14.511 Removing: /var/run/dpdk/spdk_pid635590 00:29:14.511 Removing: /var/run/dpdk/spdk_pid635828 00:29:14.511 Removing: /var/run/dpdk/spdk_pid637021 00:29:14.511 Removing: /var/run/dpdk/spdk_pid638056 00:29:14.512 Removing: /var/run/dpdk/spdk_pid638251 
00:29:14.512 Removing: /var/run/dpdk/spdk_pid638558 00:29:14.512 Removing: /var/run/dpdk/spdk_pid638767 00:29:14.512 Removing: /var/run/dpdk/spdk_pid638955 00:29:14.512 Removing: /var/run/dpdk/spdk_pid639112 00:29:14.512 Removing: /var/run/dpdk/spdk_pid639264 00:29:14.512 Removing: /var/run/dpdk/spdk_pid639450 00:29:14.512 Removing: /var/run/dpdk/spdk_pid639766 00:29:14.512 Removing: /var/run/dpdk/spdk_pid642131 00:29:14.512 Removing: /var/run/dpdk/spdk_pid642301 00:29:14.512 Removing: /var/run/dpdk/spdk_pid642461 00:29:14.512 Removing: /var/run/dpdk/spdk_pid642584 00:29:14.512 Removing: /var/run/dpdk/spdk_pid642895 00:29:14.512 Removing: /var/run/dpdk/spdk_pid643024 00:29:14.512 Removing: /var/run/dpdk/spdk_pid643329 00:29:14.512 Removing: /var/run/dpdk/spdk_pid643457 00:29:14.512 Removing: /var/run/dpdk/spdk_pid643622 00:29:14.512 Removing: /var/run/dpdk/spdk_pid643642 00:29:14.512 Removing: /var/run/dpdk/spdk_pid643921 00:29:14.512 Removing: /var/run/dpdk/spdk_pid643944 00:29:14.512 Removing: /var/run/dpdk/spdk_pid644429 00:29:14.512 Removing: /var/run/dpdk/spdk_pid644581 00:29:14.512 Removing: /var/run/dpdk/spdk_pid644778 00:29:14.512 Removing: /var/run/dpdk/spdk_pid644950 00:29:14.512 Removing: /var/run/dpdk/spdk_pid645093 00:29:14.512 Removing: /var/run/dpdk/spdk_pid645159 00:29:14.512 Removing: /var/run/dpdk/spdk_pid645368 00:29:14.512 Removing: /var/run/dpdk/spdk_pid645597 00:29:14.512 Removing: /var/run/dpdk/spdk_pid645750 00:29:14.512 Removing: /var/run/dpdk/spdk_pid645906 00:29:14.512 Removing: /var/run/dpdk/spdk_pid646184 00:29:14.512 Removing: /var/run/dpdk/spdk_pid646336 00:29:14.512 Removing: /var/run/dpdk/spdk_pid646497 00:29:14.512 Removing: /var/run/dpdk/spdk_pid646766 00:29:14.512 Removing: /var/run/dpdk/spdk_pid646930 00:29:14.771 Removing: /var/run/dpdk/spdk_pid647081 00:29:14.771 Removing: /var/run/dpdk/spdk_pid647312 00:29:14.771 Removing: /var/run/dpdk/spdk_pid647515 00:29:14.771 Removing: /var/run/dpdk/spdk_pid647675 00:29:14.771 Removing: /var/run/dpdk/spdk_pid647846 00:29:14.771 Removing: /var/run/dpdk/spdk_pid648105 00:29:14.771 Removing: /var/run/dpdk/spdk_pid648261 00:29:14.771 Removing: /var/run/dpdk/spdk_pid648424 00:29:14.771 Removing: /var/run/dpdk/spdk_pid648697 00:29:14.771 Removing: /var/run/dpdk/spdk_pid648863 00:29:14.771 Removing: /var/run/dpdk/spdk_pid649028 00:29:14.771 Removing: /var/run/dpdk/spdk_pid649207 00:29:14.771 Removing: /var/run/dpdk/spdk_pid649413 00:29:14.771 Removing: /var/run/dpdk/spdk_pid651602 00:29:14.771 Removing: /var/run/dpdk/spdk_pid678243 00:29:14.771 Removing: /var/run/dpdk/spdk_pid680854 00:29:14.771 Removing: /var/run/dpdk/spdk_pid687724 00:29:14.771 Removing: /var/run/dpdk/spdk_pid690928 00:29:14.771 Removing: /var/run/dpdk/spdk_pid693287 00:29:14.771 Removing: /var/run/dpdk/spdk_pid693692 00:29:14.771 Removing: /var/run/dpdk/spdk_pid697807 00:29:14.771 Removing: /var/run/dpdk/spdk_pid702175 00:29:14.771 Removing: /var/run/dpdk/spdk_pid702181 00:29:14.771 Removing: /var/run/dpdk/spdk_pid702843 00:29:14.771 Removing: /var/run/dpdk/spdk_pid703469 00:29:14.771 Removing: /var/run/dpdk/spdk_pid704041 00:29:14.771 Removing: /var/run/dpdk/spdk_pid704441 00:29:14.771 Removing: /var/run/dpdk/spdk_pid704455 00:29:14.771 Removing: /var/run/dpdk/spdk_pid704700 00:29:14.771 Removing: /var/run/dpdk/spdk_pid704837 00:29:14.771 Removing: /var/run/dpdk/spdk_pid704839 00:29:14.771 Removing: /var/run/dpdk/spdk_pid705491 00:29:14.771 Removing: /var/run/dpdk/spdk_pid706036 00:29:14.771 Removing: /var/run/dpdk/spdk_pid706696 00:29:14.771 
Removing: /var/run/dpdk/spdk_pid707091 00:29:14.771 Removing: /var/run/dpdk/spdk_pid707099 00:29:14.771 Removing: /var/run/dpdk/spdk_pid707363 00:29:14.771 Removing: /var/run/dpdk/spdk_pid708148 00:29:14.771 Removing: /var/run/dpdk/spdk_pid708964 00:29:14.771 Removing: /var/run/dpdk/spdk_pid714349 00:29:14.771 Removing: /var/run/dpdk/spdk_pid714634 00:29:14.771 Removing: /var/run/dpdk/spdk_pid717156 00:29:14.771 Removing: /var/run/dpdk/spdk_pid720988 00:29:14.771 Removing: /var/run/dpdk/spdk_pid723045 00:29:14.771 Removing: /var/run/dpdk/spdk_pid729460 00:29:14.771 Removing: /var/run/dpdk/spdk_pid735310 00:29:14.771 Removing: /var/run/dpdk/spdk_pid736505 00:29:14.771 Removing: /var/run/dpdk/spdk_pid737173 00:29:14.771 Removing: /var/run/dpdk/spdk_pid747415 00:29:14.771 Removing: /var/run/dpdk/spdk_pid749646 00:29:14.771 Removing: /var/run/dpdk/spdk_pid774204 00:29:14.771 Removing: /var/run/dpdk/spdk_pid777009 00:29:14.771 Removing: /var/run/dpdk/spdk_pid778131 00:29:14.771 Removing: /var/run/dpdk/spdk_pid779383 00:29:14.771 Removing: /var/run/dpdk/spdk_pid779525 00:29:14.771 Removing: /var/run/dpdk/spdk_pid779657 00:29:14.771 Removing: /var/run/dpdk/spdk_pid779790 00:29:14.771 Removing: /var/run/dpdk/spdk_pid780225 00:29:14.771 Removing: /var/run/dpdk/spdk_pid781433 00:29:14.771 Removing: /var/run/dpdk/spdk_pid782156 00:29:14.771 Removing: /var/run/dpdk/spdk_pid782585 00:29:14.771 Removing: /var/run/dpdk/spdk_pid784193 00:29:14.771 Removing: /var/run/dpdk/spdk_pid784619 00:29:14.771 Removing: /var/run/dpdk/spdk_pid785063 00:29:14.771 Removing: /var/run/dpdk/spdk_pid787567 00:29:14.771 Removing: /var/run/dpdk/spdk_pid794155 00:29:14.771 Removing: /var/run/dpdk/spdk_pid796918 00:29:14.771 Removing: /var/run/dpdk/spdk_pid800726 00:29:14.771 Removing: /var/run/dpdk/spdk_pid801673 00:29:14.771 Removing: /var/run/dpdk/spdk_pid802762 00:29:14.771 Removing: /var/run/dpdk/spdk_pid805443 00:29:14.771 Removing: /var/run/dpdk/spdk_pid807702 00:29:14.771 Removing: /var/run/dpdk/spdk_pid812061 00:29:14.771 Removing: /var/run/dpdk/spdk_pid812063 00:29:14.771 Removing: /var/run/dpdk/spdk_pid814861 00:29:14.771 Removing: /var/run/dpdk/spdk_pid814994 00:29:14.771 Removing: /var/run/dpdk/spdk_pid815137 00:29:14.771 Removing: /var/run/dpdk/spdk_pid815511 00:29:14.771 Removing: /var/run/dpdk/spdk_pid815517 00:29:14.771 Removing: /var/run/dpdk/spdk_pid818285 00:29:14.771 Removing: /var/run/dpdk/spdk_pid818621 00:29:14.771 Removing: /var/run/dpdk/spdk_pid821250 00:29:14.771 Removing: /var/run/dpdk/spdk_pid823153 00:29:14.771 Removing: /var/run/dpdk/spdk_pid826677 00:29:14.771 Removing: /var/run/dpdk/spdk_pid830536 00:29:14.771 Removing: /var/run/dpdk/spdk_pid837038 00:29:14.771 Removing: /var/run/dpdk/spdk_pid841532 00:29:14.771 Removing: /var/run/dpdk/spdk_pid841534 00:29:14.771 Removing: /var/run/dpdk/spdk_pid853926 00:29:14.771 Removing: /var/run/dpdk/spdk_pid854331 00:29:14.771 Removing: /var/run/dpdk/spdk_pid854854 00:29:14.771 Removing: /var/run/dpdk/spdk_pid855268 00:29:14.771 Removing: /var/run/dpdk/spdk_pid855844 00:29:14.771 Removing: /var/run/dpdk/spdk_pid856256 00:29:14.771 Removing: /var/run/dpdk/spdk_pid856677 00:29:14.771 Removing: /var/run/dpdk/spdk_pid857192 00:29:14.771 Removing: /var/run/dpdk/spdk_pid859709 00:29:14.771 Removing: /var/run/dpdk/spdk_pid859850 00:29:14.771 Removing: /var/run/dpdk/spdk_pid864292 00:29:14.771 Removing: /var/run/dpdk/spdk_pid864464 00:29:14.771 Removing: /var/run/dpdk/spdk_pid866070 00:29:14.771 Removing: /var/run/dpdk/spdk_pid871127 00:29:14.771 Removing: 
/var/run/dpdk/spdk_pid871135 00:29:14.771 Removing: /var/run/dpdk/spdk_pid874055 00:29:14.771 Removing: /var/run/dpdk/spdk_pid875452 00:29:14.771 Removing: /var/run/dpdk/spdk_pid876855 00:29:14.771 Removing: /var/run/dpdk/spdk_pid877601 00:29:14.771 Removing: /var/run/dpdk/spdk_pid879056 00:29:14.771 Removing: /var/run/dpdk/spdk_pid879885 00:29:14.771 Removing: /var/run/dpdk/spdk_pid885352 00:29:14.771 Removing: /var/run/dpdk/spdk_pid885698 00:29:14.771 Removing: /var/run/dpdk/spdk_pid886084 00:29:14.772 Removing: /var/run/dpdk/spdk_pid887643 00:29:14.772 Removing: /var/run/dpdk/spdk_pid887933 00:29:14.772 Removing: /var/run/dpdk/spdk_pid888323 00:29:14.772 Removing: /var/run/dpdk/spdk_pid890781 00:29:14.772 Removing: /var/run/dpdk/spdk_pid890790 00:29:14.772 Removing: /var/run/dpdk/spdk_pid892244 00:29:14.772 Removing: /var/run/dpdk/spdk_pid892724 00:29:14.772 Removing: /var/run/dpdk/spdk_pid892850 00:29:14.772 Clean 00:29:14.772 16:06:12 -- common/autotest_common.sh@1451 -- # return 0 00:29:14.772 16:06:12 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:29:14.772 16:06:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:14.772 16:06:12 -- common/autotest_common.sh@10 -- # set +x 00:29:15.030 16:06:12 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:29:15.030 16:06:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:15.030 16:06:12 -- common/autotest_common.sh@10 -- # set +x 00:29:15.030 16:06:12 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:15.030 16:06:12 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:15.030 16:06:12 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:15.030 16:06:12 -- spdk/autotest.sh@391 -- # hash lcov 00:29:15.030 16:06:12 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:15.030 16:06:12 -- spdk/autotest.sh@393 -- # hostname 00:29:15.030 16:06:12 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:15.030 geninfo: WARNING: invalid characters removed from testname! 
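The coverage pass around this point boils down to three stages: capture the post-test counters into cov_test.info, merge them with the pre-test baseline, then filter the merged tracefile down to SPDK's own sources. A condensed sketch using the same lcov invocations as the surrounding lines; $LCOV_OPTS abbreviates the long "--rc lcov_branch_coverage=1 ... --no-external -q" option block, and $rootdir / $(hostname) stand for the absolute spdk path and the node name spdk-gp-08 (these abbreviations are mine, not variables the harness defines).

lcov $LCOV_OPTS -c -d "$rootdir" -t "$(hostname)" -o cov_test.info          # capture post-test counters
lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info         # merge with the pre-test baseline
lcov $LCOV_OPTS -r cov_total.info '*/dpdk/*'            -o cov_total.info   # drop bundled DPDK
lcov $LCOV_OPTS -r cov_total.info '/usr/*'              -o cov_total.info   # drop system code
lcov $LCOV_OPTS -r cov_total.info '*/examples/vmd/*'    -o cov_total.info
lcov $LCOV_OPTS -r cov_total.info '*/app/spdk_lspci/*'  -o cov_total.info
lcov $LCOV_OPTS -r cov_total.info '*/app/spdk_top/*'    -o cov_total.info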
00:29:47.092 16:06:40 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:47.092 16:06:44 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:50.372 16:06:47 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:52.900 16:06:49 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:56.177 16:06:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:58.712 16:06:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:01.999 16:06:58 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:01.999 16:06:58 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.999 16:06:58 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:01.999 16:06:58 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.999 16:06:58 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.999 16:06:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.999 16:06:58 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.000 16:06:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.000 16:06:58 -- paths/export.sh@5 -- $ export PATH 00:30:02.000 16:06:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.000 16:06:58 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:02.000 16:06:58 -- common/autobuild_common.sh@444 -- $ date +%s 00:30:02.000 16:06:58 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720793218.XXXXXX 00:30:02.000 16:06:58 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720793218.j8tKkQ 00:30:02.000 16:06:58 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:30:02.000 16:06:58 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:30:02.000 16:06:58 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:02.000 16:06:58 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:02.000 16:06:58 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:02.000 16:06:58 -- common/autobuild_common.sh@460 -- $ get_config_params 00:30:02.000 16:06:58 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:30:02.000 16:06:58 -- common/autotest_common.sh@10 -- $ set +x 00:30:02.000 16:06:58 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:02.000 16:06:58 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:30:02.000 16:06:58 -- pm/common@17 -- $ local monitor 00:30:02.000 16:06:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:02.000 16:06:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:02.000 16:06:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:02.000 16:06:58 -- pm/common@21 -- $ date +%s 00:30:02.000 16:06:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:02.000 16:06:58 -- pm/common@21 -- $ date +%s 00:30:02.000 
16:06:58 -- pm/common@25 -- $ sleep 1 00:30:02.000 16:06:58 -- pm/common@21 -- $ date +%s 00:30:02.000 16:06:58 -- pm/common@21 -- $ date +%s 00:30:02.000 16:06:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720793218 00:30:02.000 16:06:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720793218 00:30:02.000 16:06:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720793218 00:30:02.000 16:06:58 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720793218 00:30:02.000 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720793218_collect-vmstat.pm.log 00:30:02.000 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720793218_collect-cpu-load.pm.log 00:30:02.000 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720793218_collect-cpu-temp.pm.log 00:30:02.000 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720793218_collect-bmc-pm.bmc.pm.log 00:30:02.596 16:06:59 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:30:02.596 16:06:59 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:30:02.596 16:06:59 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:02.597 16:06:59 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:02.597 16:06:59 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:02.597 16:06:59 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:02.597 16:06:59 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:02.597 16:06:59 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:02.597 16:06:59 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:02.597 16:06:59 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:02.597 16:06:59 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:02.597 16:06:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:30:02.597 16:06:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:30:02.597 16:06:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:02.597 16:06:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:02.597 16:06:59 -- pm/common@44 -- $ pid=902969 00:30:02.597 16:06:59 -- pm/common@50 -- $ kill -TERM 902969 00:30:02.597 16:06:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:02.597 16:06:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:02.597 16:06:59 -- pm/common@44 -- $ pid=902971 00:30:02.597 16:06:59 -- pm/common@50 -- $ kill 
-TERM 902971 00:30:02.597 16:06:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:02.597 16:06:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:02.597 16:06:59 -- pm/common@44 -- $ pid=902973 00:30:02.597 16:06:59 -- pm/common@50 -- $ kill -TERM 902973 00:30:02.597 16:06:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:02.597 16:06:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:02.597 16:06:59 -- pm/common@44 -- $ pid=903003 00:30:02.597 16:06:59 -- pm/common@50 -- $ sudo -E kill -TERM 903003 00:30:02.857 + [[ -n 547962 ]] 00:30:02.857 + sudo kill 547962 00:30:02.867 [Pipeline] } 00:30:02.886 [Pipeline] // stage 00:30:02.891 [Pipeline] } 00:30:02.910 [Pipeline] // timeout 00:30:02.915 [Pipeline] } 00:30:02.933 [Pipeline] // catchError 00:30:02.939 [Pipeline] } 00:30:02.955 [Pipeline] // wrap 00:30:02.961 [Pipeline] } 00:30:02.975 [Pipeline] // catchError 00:30:02.983 [Pipeline] stage 00:30:02.986 [Pipeline] { (Epilogue) 00:30:03.025 [Pipeline] catchError 00:30:03.027 [Pipeline] { 00:30:03.042 [Pipeline] echo 00:30:03.044 Cleanup processes 00:30:03.050 [Pipeline] sh 00:30:03.336 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:03.336 903119 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:03.336 903238 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:03.351 [Pipeline] sh 00:30:03.638 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:03.638 ++ grep -v 'sudo pgrep' 00:30:03.638 ++ awk '{print $1}' 00:30:03.638 + sudo kill -9 903119 00:30:03.650 [Pipeline] sh 00:30:03.937 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:12.051 [Pipeline] sh 00:30:12.335 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:12.335 Artifacts sizes are good 00:30:12.348 [Pipeline] archiveArtifacts 00:30:12.354 Archiving artifacts 00:30:12.576 [Pipeline] sh 00:30:12.884 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:12.898 [Pipeline] cleanWs 00:30:12.908 [WS-CLEANUP] Deleting project workspace... 00:30:12.908 [WS-CLEANUP] Deferred wipeout is used... 00:30:12.914 [WS-CLEANUP] done 00:30:12.916 [Pipeline] } 00:30:12.935 [Pipeline] // catchError 00:30:12.948 [Pipeline] sh 00:30:13.225 + logger -p user.info -t JENKINS-CI 00:30:13.234 [Pipeline] } 00:30:13.252 [Pipeline] // stage 00:30:13.257 [Pipeline] } 00:30:13.276 [Pipeline] // node 00:30:13.282 [Pipeline] End of Pipeline 00:30:13.315 Finished: SUCCESS